Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-05-29 07:04
Elapsed: 33m15s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0529 07:05:44.669746    4069 up.go:43] Cleaning up any leaked resources from previous cluster
I0529 07:05:44.669781    4069 dumplogs.go:38] /logs/artifacts/08845cce-c04c-11eb-b3db-1ecf15fc999e/kops toolbox dump --name e2e-459b123097-cb70c.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user admin
I0529 07:05:44.687606    4089 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0529 07:05:44.687705    4089 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-459b123097-cb70c.test-cncf-aws.k8s.io" not found
W0529 07:05:45.223558    4069 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0529 07:05:45.223617    4069 down.go:48] /logs/artifacts/08845cce-c04c-11eb-b3db-1ecf15fc999e/kops delete cluster --name e2e-459b123097-cb70c.test-cncf-aws.k8s.io --yes
I0529 07:05:45.244230    4099 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0529 07:05:45.244635    4099 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-459b123097-cb70c.test-cncf-aws.k8s.io" not found
I0529 07:05:45.793996    4069 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/05/29 07:05:45 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0529 07:05:45.802171    4069 http.go:37] curl https://ip.jsb.workers.dev
I0529 07:05:45.888941    4069 up.go:144] /logs/artifacts/08845cce-c04c-11eb-b3db-1ecf15fc999e/kops create cluster --name e2e-459b123097-cb70c.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.1 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=136693071363/debian-10-amd64-20210329-591 --channel=alpha --networking=cilium --container-runtime=containerd --admin-access 34.122.30.33/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-southeast-1a --master-size c5.large
I0529 07:05:45.906408    4109 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0529 07:05:45.906507    4109 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0529 07:05:45.970023    4109 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0529 07:05:46.474076    4109 new_cluster.go:1023]  Cloud Provider ID = aws
... skipping 42 lines ...

I0529 07:06:15.867366    4069 up.go:181] /logs/artifacts/08845cce-c04c-11eb-b3db-1ecf15fc999e/kops validate cluster --name e2e-459b123097-cb70c.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0529 07:06:15.884194    4129 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0529 07:06:15.884291    4129 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-459b123097-cb70c.test-cncf-aws.k8s.io

W0529 07:06:17.589125    4129 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-459b123097-cb70c.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:06:27.620586    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:06:37.655873    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:06:47.687017    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:06:57.734396    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:07:07.766145    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:07:17.803166    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:07:27.851059    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:07:37.896876    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:07:47.936280    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:07:57.966489    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:08:08.076185    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:08:18.124092    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:08:28.156435    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:08:38.184974    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:08:48.223584    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:08:58.275466    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:09:08.308579    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:09:18.342118    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:09:28.373998    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:09:38.405865    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 07:09:48.440397    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
W0529 07:09:58.460482    4129 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-459b123097-cb70c.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

NODE STATUS
... skipping 7 lines ...
Machine	i-0ed6c3add6e6a7de9				machine "i-0ed6c3add6e6a7de9" has not yet joined cluster
Machine	i-0faef1b2c461b2bc0				machine "i-0faef1b2c461b2bc0" has not yet joined cluster
Pod	kube-system/cilium-9h72w			system-node-critical pod "cilium-9h72w" is not ready (cilium-agent)
Pod	kube-system/coredns-autoscaler-6f594f4c58-kb772	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-kb772" is pending
Pod	kube-system/coredns-f45c4bf76-5xwkz		system-cluster-critical pod "coredns-f45c4bf76-5xwkz" is pending

Validation Failed
W0529 07:10:13.140357    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 16 lines ...
Pod	kube-system/cilium-ph5rg				system-node-critical pod "cilium-ph5rg" is pending
Pod	kube-system/cilium-wth8r				system-node-critical pod "cilium-wth8r" is pending
Pod	kube-system/cilium-wzxqf				system-node-critical pod "cilium-wzxqf" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-kb772		system-cluster-critical pod "coredns-autoscaler-6f594f4c58-kb772" is pending
Pod	kube-system/coredns-f45c4bf76-5xwkz			system-cluster-critical pod "coredns-f45c4bf76-5xwkz" is pending

Validation Failed
W0529 07:10:26.144430    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 14 lines ...
Pod	kube-system/cilium-ph5rg			system-node-critical pod "cilium-ph5rg" is not ready (cilium-agent)
Pod	kube-system/cilium-wth8r			system-node-critical pod "cilium-wth8r" is not ready (cilium-agent)
Pod	kube-system/cilium-wzxqf			system-node-critical pod "cilium-wzxqf" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-kb772	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-kb772" is pending
Pod	kube-system/coredns-f45c4bf76-5xwkz		system-cluster-critical pod "coredns-f45c4bf76-5xwkz" is pending

Validation Failed
W0529 07:10:39.319841    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 9 lines ...
KIND	NAME						MESSAGE
Pod	kube-system/cilium-ph5rg			system-node-critical pod "cilium-ph5rg" is not ready (cilium-agent)
Pod	kube-system/cilium-wth8r			system-node-critical pod "cilium-wth8r" is not ready (cilium-agent)
Pod	kube-system/cilium-wzxqf			system-node-critical pod "cilium-wzxqf" is not ready (cilium-agent)
Pod	kube-system/coredns-autoscaler-6f594f4c58-kb772	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-kb772" is pending

Validation Failed
W0529 07:10:52.389741    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 6 lines ...
ip-172-20-61-32.ap-southeast-1.compute.internal		node	True

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/cilium-wzxqf	system-node-critical pod "cilium-wzxqf" is not ready (cilium-agent)

Validation Failed
W0529 07:11:05.410743    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 6 lines ...
ip-172-20-61-32.ap-southeast-1.compute.internal		node	True

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/cilium-wzxqf	system-node-critical pod "cilium-wzxqf" is not ready (cilium-agent)

Validation Failed
W0529 07:11:18.362855    4129 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-1a	Master	c5.large	1	1	ap-southeast-1a
nodes-ap-southeast-1a	Node	t3.medium	4	4	ap-southeast-1a

... skipping 1212 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 124 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 29 07:14:01.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":1,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
May 29 07:14:02.043: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72
May 29 07:14:02.422: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PVC
May 29 07:14:02.991: INFO: error finding default storageClass : No default storage class found
[AfterEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 29 07:14:02.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pvc-protection-4498" for this suite.
[AfterEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108
... skipping 2 lines ...
S [SKIPPING] in Spec Setup (BeforeEach) [2.105 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PVC that is not in active use by a pod [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114

  error finding default storageClass : No default storage class found

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819
------------------------------
SS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 29 07:14:03.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4210" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 29 07:14:03.399: INFO: Only supported for providers [azure] (not aws)
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 29 07:14:03.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3741" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 29 07:14:04.275: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 90 lines ...
• [SLOW TEST:5.139 seconds]
[sig-instrumentation] Events API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":2,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 29 07:14:07.508: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75
    should list and delete a collection of PodDisruptionBudgets [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 29 07:14:07.908: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 63 lines ...
• [SLOW TEST:7.142 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
May 29 07:14:03.857: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1895" to be "Succeeded or Failed"
May 29 07:14:04.057: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 199.280641ms
May 29 07:14:06.258: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40000692s
May 29 07:14:08.457: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.599767265s
STEP: Saw pod success
May 29 07:14:08.457: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
May 29 07:14:08.657: INFO: Trying to get logs from node ip-172-20-56-44.ap-southeast-1.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
May 29 07:14:09.067: INFO: Waiting for pod pod-host-path-test to disappear
May 29 07:14:09.267: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 29 07:14:11.037: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 125 lines ...
• [SLOW TEST:9.956 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0529 07:14:01.986040    4849 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 29 07:14:01.986: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
May 29 07:14:02.567: INFO: Waiting up to 5m0s for pod "test-pod-2a575c69-03fc-49da-b9ba-6af4f554b5b4" in namespace "svcaccounts-3369" to be "Succeeded or Failed"
May 29 07:14:02.764: INFO: Pod "test-pod-2a575c69-03fc-49da-b9ba-6af4f554b5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 196.313736ms
May 29 07:14:04.958: INFO: Pod "test-pod-2a575c69-03fc-49da-b9ba-6af4f554b5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.39038877s
May 29 07:14:07.152: INFO: Pod "test-pod-2a575c69-03fc-49da-b9ba-6af4f554b5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.584532747s
May 29 07:14:09.347: INFO: Pod "test-pod-2a575c69-03fc-49da-b9ba-6af4f554b5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.779673321s
May 29 07:14:11.542: INFO: Pod "test-pod-2a575c69-03fc-49da-b9ba-6af4f554b5b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.974537498s
STEP: Saw pod success
May 29 07:14:11.542: INFO: Pod "test-pod-2a575c69-03fc-49da-b9ba-6af4f554b5b4" satisfied condition "Succeeded or Failed"
May 29 07:14:11.737: INFO: Trying to get logs from node ip-172-20-54-213.ap-southeast-1.compute.internal pod test-pod-2a575c69-03fc-49da-b9ba-6af4f554b5b4 container agnhost-container: <nil>
STEP: delete the pod
May 29 07:14:12.130: INFO: Waiting for pod test-pod-2a575c69-03fc-49da-b9ba-6af4f554b5b4 to disappear
May 29 07:14:12.323: INFO: Pod test-pod-2a575c69-03fc-49da-b9ba-6af4f554b5b4 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.717 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
May 29 07:14:02.189: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-23e3ebb8-ac37-408b-b1df-ead0752bbe33
STEP: Creating a pod to test consume secrets
May 29 07:14:02.944: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-66a58fc4-bf5c-49a8-817d-d38176ac13d5" in namespace "projected-4322" to be "Succeeded or Failed"
May 29 07:14:03.132: INFO: Pod "pod-projected-secrets-66a58fc4-bf5c-49a8-817d-d38176ac13d5": Phase="Pending", Reason="", readiness=false. Elapsed: 188.572853ms
May 29 07:14:05.321: INFO: Pod "pod-projected-secrets-66a58fc4-bf5c-49a8-817d-d38176ac13d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377102492s
May 29 07:14:07.512: INFO: Pod "pod-projected-secrets-66a58fc4-bf5c-49a8-817d-d38176ac13d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.567775111s
May 29 07:14:09.700: INFO: Pod "pod-projected-secrets-66a58fc4-bf5c-49a8-817d-d38176ac13d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.756657363s
May 29 07:14:11.890: INFO: Pod "pod-projected-secrets-66a58fc4-bf5c-49a8-817d-d38176ac13d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.945999221s
STEP: Saw pod success
May 29 07:14:11.890: INFO: Pod "pod-projected-secrets-66a58fc4-bf5c-49a8-817d-d38176ac13d5" satisfied condition "Succeeded or Failed"
May 29 07:14:12.078: INFO: Trying to get logs from node ip-172-20-61-32.ap-southeast-1.compute.internal pod pod-projected-secrets-66a58fc4-bf5c-49a8-817d-d38176ac13d5 container secret-volume-test: <nil>
STEP: delete the pod
May 29 07:14:12.532: INFO: Waiting for pod pod-projected-secrets-66a58fc4-bf5c-49a8-817d-d38176ac13d5 to disappear
May 29 07:14:12.741: INFO: Pod pod-projected-secrets-66a58fc4-bf5c-49a8-817d-d38176ac13d5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.986 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 29 07:14:13.353: INFO: Only supported for providers [azure] (not aws)
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if cluster-info dump succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1078
STEP: running cluster-info dump
May 29 07:14:03.978: INFO: Running '/tmp/kubectl1588897144/kubectl --server=https://api.e2e-459b123097-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7893 cluster-info dump'
May 29 07:14:12.748: INFO: stderr: ""
May 29 07:14:12.749: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1780\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"11e47d04-161d-4157-b541-0dcf395748fd\",\n                \"resourceVersion\": \"724\",\n                \"creationTimestamp\": \"2021-05-29T07:08:40Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"c5.large\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ap-southeast-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ap-southeast-1a\",\n                    \"kops.k8s.io/instancegroup\": \"master-ap-southeast-1a\",\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"master\",\n                    \"node-role.kubernetes.io/control-plane\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\",\n                    \"node.kubernetes.io/exclude-from-external-load-balancers\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"c5.large\",\n                    \"topology.kubernetes.io/region\": \"ap-southeast-1\",\n                    \"topology.kubernetes.io/zone\": \"ap-southeast-1a\"\n                },\n                \"annotations\": {\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.0.128\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.0.127\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.0.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.0.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.0.0/24\"\n                ],\n                \"providerID\": \"aws:///ap-southeast-1a/i-07708f476b85f31f9\",\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"49347208Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3798380Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"45478386818\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3695980Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n        
            {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:10:25Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:25Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:09:50Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:08:33Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:09:50Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:08:33Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:09:50Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:08:33Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:09:50Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:09:50Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.36.217\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"13.212.113.26\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-13-212-113-26.ap-southeast-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2036c77d52350153d488ceeb9592de\",\n                    \"systemUUID\": \"ec2036c7-7d52-3501-53d4-88ceeb9592de\",\n                    \"bootID\": \"1696d625-ed86-4311-97d3-e0a72b1157aa\",\n                    \"kernelVersion\": \"4.19.0-16-cloud-amd64\",\n                    \"osImage\": \"Debian GNU/Linux 10 (buster)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.6\",\n                    \"kubeletVersion\": \"v1.21.1\",\n                    \"kubeProxyVersion\": \"v1.21.1\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                            \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\"\n                        ],\n                        \"sizeBytes\": 171082409\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                            \"docker.io/cilium/cilium:v1.9.7\"\n                        ],\n                        \"sizeBytes\": 147154186\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 132727503\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 126863837\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager-amd64@sha256:e0d7e62864b91b05f02e51ce0ecf8c986270eedb2d1512edff0d43b8660a442e\",\n                            \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 121076547\n                    },\n                    {\n                        \"names\": 
[\n                            \"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.1\"\n                        ],\n                        \"sizeBytes\": 113849868\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kops-controller:1.22.0-alpha.1\"\n                        ],\n                        \"sizeBytes\": 112057869\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 51886392\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.1\"\n                        ],\n                        \"sizeBytes\": 25632279\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/cilium/operator@sha256:151834edf9bf52729719ae50f3465a4a512f22e6eb5de84de8499ca19ca571b0\",\n                            \"docker.io/cilium/operator:v1.9.7\"\n                        ],\n                        \"sizeBytes\": 17659429\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-54-213.ap-southeast-1.compute.internal\",\n                \"uid\": \"85375be1-aef3-4b9a-b238-f5da9c6c872f\",\n                \"resourceVersion\": \"812\",\n                \"creationTimestamp\": \"2021-05-29T07:10:13Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ap-southeast-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ap-southeast-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ap-southeast-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-54-213.ap-southeast-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"ap-southeast-1\",\n                    \"topology.kubernetes.io/zone\": \"ap-southeast-1a\"\n                },\n                \"annotations\": {\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.2.222\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.2.92\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.2.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.2.0/24\",\n                
\"podCIDRs\": [\n                    \"100.96.2.0/24\"\n                ],\n                \"providerID\": \"aws:///ap-southeast-1a/i-0ed6c3add6e6a7de9\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"49347208Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3982680Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"45478386818\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3880280Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:10:39Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:39Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:10:43Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:13Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:10:43Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:13Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:10:43Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:13Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:10:43Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:33Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.54.213\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"54.255.203.155\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-54-213.ap-southeast-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-54-213.ap-southeast-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-54-255-203-155.ap-southeast-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2ec39c61d7a4948ca856b62f3528b0\",\n                    \"systemUUID\": \"ec2ec39c-61d7-a494-8ca8-56b62f3528b0\",\n                    \"bootID\": \"0a69964b-8bcc-4a5e-a079-444c9d6dcb5e\",\n                    \"kernelVersion\": \"4.19.0-16-cloud-amd64\",\n                    \"osImage\": \"Debian GNU/Linux 10 (buster)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.6\",\n                    \"kubeletVersion\": \"v1.21.1\",\n                    \"kubeProxyVersion\": \"v1.21.1\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                            \"docker.io/cilium/cilium:v1.9.7\"\n                        ],\n                        \"sizeBytes\": 147154186\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 132727503\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\",\n                \"uid\": \"a5b12367-a9f4-43e4-bb19-1c24c9a4a1b5\",\n                \"resourceVersion\": \"962\",\n                \"creationTimestamp\": \"2021-05-29T07:10:13Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ap-southeast-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": 
\"ap-southeast-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ap-southeast-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"ap-southeast-1\",\n                    \"topology.kubernetes.io/zone\": \"ap-southeast-1a\"\n                },\n                \"annotations\": {\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.3.3\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.3.212\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.3.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.3.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.3.0/24\"\n                ],\n                \"providerID\": \"aws:///ap-southeast-1a/i-055bc73821838bf73\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"49347208Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3982696Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"45478386818\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3880296Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:10:39Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:39Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:11:13Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:13Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:11:13Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:13Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n  
                  {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:11:13Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:13Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:11:13Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:43Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.56.44\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"13.228.203.244\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-13-228-203-244.ap-southeast-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2d3e0a35b0bd4f6da1041fb0880c15\",\n                    \"systemUUID\": \"ec2d3e0a-35b0-bd4f-6da1-041fb0880c15\",\n                    \"bootID\": \"ed533a18-d8d9-45af-9a26-218ad51249d1\",\n                    \"kernelVersion\": \"4.19.0-16-cloud-amd64\",\n                    \"osImage\": \"Debian GNU/Linux 10 (buster)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.6\",\n                    \"kubeletVersion\": \"v1.21.1\",\n                    \"kubeProxyVersion\": \"v1.21.1\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                            \"docker.io/cilium/cilium:v1.9.7\"\n                        ],\n                        \"sizeBytes\": 147154186\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 132727503\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                            
\"docker.io/coredns/coredns:1.8.3\"\n                        ],\n                        \"sizeBytes\": 12893350\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\",\n                \"uid\": \"6d959c55-4185-4769-bcde-1873fcecde33\",\n                \"resourceVersion\": \"957\",\n                \"creationTimestamp\": \"2021-05-29T07:10:12Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ap-southeast-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ap-southeast-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ap-southeast-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"ap-southeast-1\",\n                    \"topology.kubernetes.io/zone\": \"ap-southeast-1a\"\n                },\n                \"annotations\": {\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.1.250\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.1.177\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.1.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.1.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.1.0/24\"\n                ],\n                \"providerID\": \"aws:///ap-southeast-1a/i-0faef1b2c461b2bc0\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"49347208Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3982680Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"45478386818\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3880280Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        
\"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:10:39Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:39Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:11:13Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:12Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:11:13Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:12Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:11:13Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:12Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:11:13Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:32Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.59.92\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"54.169.50.147\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-54-169-50-147.ap-southeast-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec24fbd8c082992ed52c68945de78ab3\",\n                    \"systemUUID\": \"ec24fbd8-c082-992e-d52c-68945de78ab3\",\n                    \"bootID\": \"aeddde17-4c87-420a-90d0-73723a02bee0\",\n                    \"kernelVersion\": \"4.19.0-16-cloud-amd64\",\n                    \"osImage\": \"Debian GNU/Linux 10 (buster)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.6\",\n                    \"kubeletVersion\": \"v1.21.1\",\n                    \"kubeProxyVersion\": \"v1.21.1\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                            \"docker.io/cilium/cilium:v1.9.7\"\n                        ],\n                        \"sizeBytes\": 147154186\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 132727503\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                            \"docker.io/coredns/coredns:1.8.3\"\n                        ],\n                        \"sizeBytes\": 12893350\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\",\n                \"uid\": \"7bcd160f-3cec-4559-842d-084122c0d55a\",\n                \"resourceVersion\": \"992\",\n                \"creationTimestamp\": \"2021-05-29T07:10:22Z\",\n                \"labels\": 
{\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ap-southeast-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ap-southeast-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ap-southeast-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"ap-southeast-1\",\n                    \"topology.kubernetes.io/zone\": \"ap-southeast-1a\"\n                },\n                \"annotations\": {\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.4.5\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.4.87\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.4.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.4.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.4.0/24\"\n                ],\n                \"providerID\": \"aws:///ap-southeast-1a/i-0ec95ccc4d7f93c13\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"49347208Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3982680Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"45478386818\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3880280Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:10:49Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:49Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:11:23Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:22Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n              
          \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:11:23Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:22Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:11:23Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:22Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-05-29T07:11:23Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:42Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.61.32\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"13.250.10.232\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-13-250-10-232.ap-southeast-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec281f96b32e23cf9003433be44020e9\",\n                    \"systemUUID\": \"ec281f96-b32e-23cf-9003-433be44020e9\",\n                    \"bootID\": \"09322596-8fb1-473b-99da-aa9f86c1b38a\",\n                    \"kernelVersion\": \"4.19.0-16-cloud-amd64\",\n                    \"osImage\": \"Debian GNU/Linux 10 (buster)\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.6\",\n                    \"kubeletVersion\": \"v1.21.1\",\n                    \"kubeProxyVersion\": \"v1.21.1\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                            \"docker.io/cilium/cilium:v1.9.7\"\n                        ],\n                        \"sizeBytes\": 147154186\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/kube-proxy-amd64:v1.21.1\"\n                        ],\n                        \"sizeBytes\": 132727503\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:67640771ad9fc56f109d5b01e020f0c858e7c890bb0eb15ba0ebd325df3285e7\",\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\"\n                        ],\n                        \"sizeBytes\": 15191740\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 299513\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"230\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"cilium-8q8dv.168377fb852f9dcb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"786d9d37-21f0-48d5-87ad-f6270a89dd0b\",\n                \"resourceVersion\": \"100\",\n                \"creationTimestamp\": \"2021-05-29T07:10:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-8q8dv\",\n                \"uid\": \"7c712b38-284f-4396-9c5f-7e3766caf358\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"636\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-8q8dv to ip-172-20-59-92.ap-southeast-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:12Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-8q8dv.168377fc6a9f084b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"61781042-f988-4f19-85b2-2fe5acef4aa0\",\n                \"resourceVersion\": \"121\",\n                \"creationTimestamp\": \"2021-05-29T07:10:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-8q8dv\",\n                \"uid\": \"7c712b38-284f-4396-9c5f-7e3766caf358\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"640\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"docker.io/cilium/cilium:v1.9.7\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:16Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:16Z\",\n            \"count\": 1,\n            \"type\": 
\"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-8q8dv.168377fedff0946c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d3fe5173-5eb3-40ee-9f56-d084394a3d79\",\n                \"resourceVersion\": \"142\",\n                \"creationTimestamp\": \"2021-05-29T07:10:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-8q8dv\",\n                \"uid\": \"7c712b38-284f-4396-9c5f-7e3766caf358\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"640\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"docker.io/cilium/cilium:v1.9.7\\\" in 10.558189528s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:27Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-8q8dv.168377ff4bf223e1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"72f74758-cc7b-4798-86d8-d50c41874a52\",\n                \"resourceVersion\": \"145\",\n                \"creationTimestamp\": \"2021-05-29T07:10:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-8q8dv\",\n                \"uid\": \"7c712b38-284f-4396-9c5f-7e3766caf358\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"640\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:29Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-8q8dv.168377ff52ec3a10\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c06b4625-08a0-4848-97cf-81f511e0e134\",\n                \"resourceVersion\": \"146\",\n                \"creationTimestamp\": \"2021-05-29T07:10:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-8q8dv\",\n                \"uid\": \"7c712b38-284f-4396-9c5f-7e3766caf358\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"640\",\n                
\"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:29Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-8q8dv.168377ff843a5389\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9256833b-3bb2-46e9-972e-d5fa932ea74a\",\n                \"resourceVersion\": \"151\",\n                \"creationTimestamp\": \"2021-05-29T07:10:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-8q8dv\",\n                \"uid\": \"7c712b38-284f-4396-9c5f-7e3766caf358\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"640\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"docker.io/cilium/cilium:v1.9.7\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-8q8dv.168377ff87f376f9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"18548245-9a92-4eaf-96df-d1c9c4ef8ef2\",\n                \"resourceVersion\": \"152\",\n                \"creationTimestamp\": \"2021-05-29T07:10:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-8q8dv\",\n                \"uid\": \"7c712b38-284f-4396-9c5f-7e3766caf358\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"640\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-8q8dv.168377ff8e3d7148\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"e3a7eb21-d25e-485d-95ce-8a6eb37ad466\",\n                \"resourceVersion\": \"153\",\n                \"creationTimestamp\": \"2021-05-29T07:10:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-8q8dv\",\n                \"uid\": \"7c712b38-284f-4396-9c5f-7e3766caf358\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"640\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-9h72w.168377e9f694507d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a15f0b50-5b0c-418b-80fb-401102eb60bb\",\n                \"resourceVersion\": \"53\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-9h72w\",\n                \"uid\": \"d1d8e66a-1101-4467-aec9-332758db0cbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"423\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-9h72w to ip-172-20-36-217.ap-southeast-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-9h72w.168377ee11d7d057\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ff62b96e-ffbd-4ae0-bd72-eafd3e29d00d\",\n                \"resourceVersion\": \"59\",\n                \"creationTimestamp\": \"2021-05-29T07:09:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-9h72w\",\n                \"uid\": \"d1d8e66a-1101-4467-aec9-332758db0cbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"446\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"docker.io/cilium/cilium:v1.9.7\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:15Z\",\n            \"lastTimestamp\": 
\"2021-05-29T07:09:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-9h72w.168377f09618f755\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"acd7e7d3-fcc3-4b8c-b7a3-b8f006900dc4\",\n                \"resourceVersion\": \"65\",\n                \"creationTimestamp\": \"2021-05-29T07:09:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-9h72w\",\n                \"uid\": \"d1d8e66a-1101-4467-aec9-332758db0cbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"446\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"docker.io/cilium/cilium:v1.9.7\\\" in 10.808779827s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:25Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-9h72w.168377f09619e0c0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c1bef6e1-16bd-4816-a7ee-2b40c23ec4af\",\n                \"resourceVersion\": \"68\",\n                \"creationTimestamp\": \"2021-05-29T07:09:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-9h72w\",\n                \"uid\": \"d1d8e66a-1101-4467-aec9-332758db0cbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"446\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Failed\",\n            \"message\": \"Error: services have not yet been read at least once, cannot construct envvars\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:25Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:26Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-9h72w.168377f0bda89c67\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dd51d3e1-4180-4cc9-a14b-bfbd2b6f4983\",\n                \"resourceVersion\": \"75\",\n                \"creationTimestamp\": \"2021-05-29T07:09:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-9h72w\",\n                \"uid\": 
\"d1d8e66a-1101-4467-aec9-332758db0cbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"446\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"docker.io/cilium/cilium:v1.9.7\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:26Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:41Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-9h72w.168377f43d23c169\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b450d4d2-828a-46d9-adea-a1b3659099e5\",\n                \"resourceVersion\": \"76\",\n                \"creationTimestamp\": \"2021-05-29T07:09:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-9h72w\",\n                \"uid\": \"d1d8e66a-1101-4467-aec9-332758db0cbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"446\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:41Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-9h72w.168377f4447f8ee5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"199d53ae-2041-4501-ab77-0b4abda794dc\",\n                \"resourceVersion\": \"77\",\n                \"creationTimestamp\": \"2021-05-29T07:09:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-9h72w\",\n                \"uid\": \"d1d8e66a-1101-4467-aec9-332758db0cbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"446\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:41Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n          
  \"metadata\": {\n                \"name\": \"cilium-9h72w.168377f4784cd33f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f599d70d-f376-48d7-afa0-d5629a320006\",\n                \"resourceVersion\": \"78\",\n                \"creationTimestamp\": \"2021-05-29T07:09:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-9h72w\",\n                \"uid\": \"d1d8e66a-1101-4467-aec9-332758db0cbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"446\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"docker.io/cilium/cilium:v1.9.7\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:42Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-9h72w.168377f47cbc08bf\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bd30c163-575c-451a-9b7f-02c39431fcbf\",\n                \"resourceVersion\": \"79\",\n                \"creationTimestamp\": \"2021-05-29T07:09:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-9h72w\",\n                \"uid\": \"d1d8e66a-1101-4467-aec9-332758db0cbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"446\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:42Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-9h72w.168377f4823bc9ab\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"359fd0a5-1b92-4a8a-bdf3-5d0eae6254fd\",\n                \"resourceVersion\": \"80\",\n                \"creationTimestamp\": \"2021-05-29T07:09:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-9h72w\",\n                \"uid\": \"d1d8e66a-1101-4467-aec9-332758db0cbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"446\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container cilium-agent\",\n            \"source\": 
{\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:42Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-9h72w.168377fbed1d0c63\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0b8073e8-bd04-443a-9dff-9591ac22021b\",\n                \"resourceVersion\": \"120\",\n                \"creationTimestamp\": \"2021-05-29T07:10:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-9h72w\",\n                \"uid\": \"d1d8e66a-1101-4467-aec9-332758db0cbb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"446\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Unhealthy\",\n            \"message\": \"Readiness probe failed: Get \\\"http://127.0.0.1:9876/healthz\\\": dial tcp 127.0.0.1:9876: connect: connection refused\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:14Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:14Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-7cd4557b96-jxr8n.168377e9f70a00c9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8d426580-7795-4e60-93b6-125f7e87bc69\",\n                \"resourceVersion\": \"55\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-7cd4557b96-jxr8n\",\n                \"uid\": \"4ab7253e-855c-4690-80ef-0ad8bc8f92ea\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"422\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-operator-7cd4557b96-jxr8n to ip-172-20-36-217.ap-southeast-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-7cd4557b96-jxr8n.168377ee249cde5c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b8134db1-cf56-42dd-8fe5-5d3f0fe5f5cb\",\n                \"resourceVersion\": \"60\",\n                \"creationTimestamp\": \"2021-05-29T07:09:15Z\"\n            
},\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-7cd4557b96-jxr8n\",\n                \"uid\": \"4ab7253e-855c-4690-80ef-0ad8bc8f92ea\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"448\",\n                \"fieldPath\": \"spec.containers{cilium-operator}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"docker.io/cilium/operator:v1.9.7\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:15Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-7cd4557b96-jxr8n.168377f203c9cc52\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"99216a03-bf62-4530-844d-09fa58be408a\",\n                \"resourceVersion\": \"72\",\n                \"creationTimestamp\": \"2021-05-29T07:09:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-7cd4557b96-jxr8n\",\n                \"uid\": \"4ab7253e-855c-4690-80ef-0ad8bc8f92ea\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"448\",\n                \"fieldPath\": \"spec.containers{cilium-operator}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"docker.io/cilium/operator:v1.9.7\\\" in 16.628580084s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:32Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-7cd4557b96-jxr8n.168377f20c7e2608\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d74d14e7-48f6-48aa-8a73-76b23fba2f5e\",\n                \"resourceVersion\": \"73\",\n                \"creationTimestamp\": \"2021-05-29T07:09:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-7cd4557b96-jxr8n\",\n                \"uid\": \"4ab7253e-855c-4690-80ef-0ad8bc8f92ea\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"448\",\n                \"fieldPath\": \"spec.containers{cilium-operator}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container cilium-operator\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": 
\"2021-05-29T07:09:32Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-7cd4557b96-jxr8n.168377f215498704\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6455062a-aee3-4d10-a38a-cdd790d2fe90\",\n                \"resourceVersion\": \"74\",\n                \"creationTimestamp\": \"2021-05-29T07:09:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-7cd4557b96-jxr8n\",\n                \"uid\": \"4ab7253e-855c-4690-80ef-0ad8bc8f92ea\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"448\",\n                \"fieldPath\": \"spec.containers{cilium-operator}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container cilium-operator\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:32Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-7cd4557b96.168377e9f20935d1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6d26bf1b-8ca1-4c81-b0ad-74b8d60b4748\",\n                \"resourceVersion\": \"56\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-7cd4557b96\",\n                \"uid\": \"dff7677a-5be4-43cb-822e-1f7ff1770ff9\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"406\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: cilium-operator-7cd4557b96-jxr8n\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator.168377e9e8c6a282\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c792a71b-7dc2-4231-bd2d-a0a279954652\",\n                \"resourceVersion\": \"45\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator\",\n                \"uid\": \"6c90115a-5541-4c35-bb46-d8c6c48e9626\",\n                \"apiVersion\": \"apps/v1\",\n            
    \"resourceVersion\": \"285\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set cilium-operator-7cd4557b96 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-ph5rg.168377fba5a00067\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8357e3f5-f2a9-4b08-b62f-ea05a1e683d4\",\n                \"resourceVersion\": \"118\",\n                \"creationTimestamp\": \"2021-05-29T07:10:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-ph5rg\",\n                \"uid\": \"8f0b9dd3-041e-481a-b00f-3b80fc50c208\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"659\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-ph5rg to ip-172-20-56-44.ap-southeast-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:13Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-ph5rg.168377fc85598585\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d41740f8-6ab2-4101-812d-7729b4008bea\",\n                \"resourceVersion\": \"123\",\n                \"creationTimestamp\": \"2021-05-29T07:10:17Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-ph5rg\",\n                \"uid\": \"8f0b9dd3-041e-481a-b00f-3b80fc50c208\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"661\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"docker.io/cilium/cilium:v1.9.7\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:17Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-ph5rg.168377fefbbf5fc3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b645cac9-183a-4544-a077-97895cb1be31\",\n                \"resourceVersion\": \"143\",\n                \"creationTimestamp\": \"2021-05-29T07:10:27Z\"\n            
},\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-ph5rg\",\n                \"uid\": \"8f0b9dd3-041e-481a-b00f-3b80fc50c208\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"661\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"docker.io/cilium/cilium:v1.9.7\\\" in 10.576286431s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:27Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-ph5rg.168377ff66866d48\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1f649753-b0cc-41d7-bab6-6596b2810664\",\n                \"resourceVersion\": \"147\",\n                \"creationTimestamp\": \"2021-05-29T07:10:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-ph5rg\",\n                \"uid\": \"8f0b9dd3-041e-481a-b00f-3b80fc50c208\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"661\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:29Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-ph5rg.168377ff71addcf0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cc1f5ebe-e9bf-40dc-8f65-07aa6634b131\",\n                \"resourceVersion\": \"150\",\n                \"creationTimestamp\": \"2021-05-29T07:10:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-ph5rg\",\n                \"uid\": \"8f0b9dd3-041e-481a-b00f-3b80fc50c208\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"661\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:29Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:29Z\",\n            
\"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-ph5rg.168377ffacb76ac6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bfb1ebbe-6a99-411b-9231-650afde4283f\",\n                \"resourceVersion\": \"157\",\n                \"creationTimestamp\": \"2021-05-29T07:10:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-ph5rg\",\n                \"uid\": \"8f0b9dd3-041e-481a-b00f-3b80fc50c208\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"661\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"docker.io/cilium/cilium:v1.9.7\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-ph5rg.168377ffb1201459\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8bd625e1-30f3-4000-abdd-1bd0bfa8afe3\",\n                \"resourceVersion\": \"158\",\n                \"creationTimestamp\": \"2021-05-29T07:10:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-ph5rg\",\n                \"uid\": \"8f0b9dd3-041e-481a-b00f-3b80fc50c208\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"661\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-ph5rg.168377ffb843f037\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bd5bdbf7-949b-4807-9468-721aadc0c313\",\n                \"resourceVersion\": \"159\",\n                \"creationTimestamp\": \"2021-05-29T07:10:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-ph5rg\",\n                \"uid\": \"8f0b9dd3-041e-481a-b00f-3b80fc50c208\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"661\",\n     
           \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wth8r.168377fb95c814aa\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6ec2423a-612e-489f-887a-03aee207d263\",\n                \"resourceVersion\": \"110\",\n                \"creationTimestamp\": \"2021-05-29T07:10:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wth8r\",\n                \"uid\": \"3f7cdda8-0277-491f-9013-9b6efe22e89d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"647\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-wth8r to ip-172-20-54-213.ap-southeast-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:13Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wth8r.168377fc8502e4e0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"91565c44-fbed-438c-ab36-8423830ea7b2\",\n                \"resourceVersion\": \"122\",\n                \"creationTimestamp\": \"2021-05-29T07:10:17Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wth8r\",\n                \"uid\": \"3f7cdda8-0277-491f-9013-9b6efe22e89d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"650\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"docker.io/cilium/cilium:v1.9.7\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-213.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:17Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wth8r.168377ff055ede0b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6faf5d76-4fb5-422a-8912-8541ceab6234\",\n                \"resourceVersion\": \"144\",\n                
\"creationTimestamp\": \"2021-05-29T07:10:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wth8r\",\n                \"uid\": \"3f7cdda8-0277-491f-9013-9b6efe22e89d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"650\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"docker.io/cilium/cilium:v1.9.7\\\" in 10.74342416s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-213.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:27Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wth8r.168377ff6abab23f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c472e16d-f73c-4bc0-a8fe-52e7e285b3ac\",\n                \"resourceVersion\": \"148\",\n                \"creationTimestamp\": \"2021-05-29T07:10:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wth8r\",\n                \"uid\": \"3f7cdda8-0277-491f-9013-9b6efe22e89d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"650\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-213.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:29Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wth8r.168377ff7126b224\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"12ceaf70-1b84-4f4a-84f7-bd7d6a7ea301\",\n                \"resourceVersion\": \"149\",\n                \"creationTimestamp\": \"2021-05-29T07:10:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wth8r\",\n                \"uid\": \"3f7cdda8-0277-491f-9013-9b6efe22e89d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"650\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-213.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:29Z\",\n            
\"lastTimestamp\": \"2021-05-29T07:10:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wth8r.168377ff945bff56\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c183eb1a-34b0-4bbb-a86b-74e9b91fb549\",\n                \"resourceVersion\": \"154\",\n                \"creationTimestamp\": \"2021-05-29T07:10:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wth8r\",\n                \"uid\": \"3f7cdda8-0277-491f-9013-9b6efe22e89d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"650\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"docker.io/cilium/cilium:v1.9.7\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-213.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wth8r.168377ff99a75169\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1942976c-6e27-4860-8a34-0bc229d0649b\",\n                \"resourceVersion\": \"155\",\n                \"creationTimestamp\": \"2021-05-29T07:10:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wth8r\",\n                \"uid\": \"3f7cdda8-0277-491f-9013-9b6efe22e89d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"650\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-213.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wth8r.168377ffa62d1f0c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0641672e-f772-4ae7-831b-2c35809beb12\",\n                \"resourceVersion\": \"156\",\n                \"creationTimestamp\": \"2021-05-29T07:10:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wth8r\",\n                \"uid\": \"3f7cdda8-0277-491f-9013-9b6efe22e89d\",\n                \"apiVersion\": 
\"v1\",\n                \"resourceVersion\": \"650\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-54-213.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wzxqf.168377fdda7a527c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7182a13c-cc38-4eb1-9f66-e497574d9dfd\",\n                \"resourceVersion\": \"136\",\n                \"creationTimestamp\": \"2021-05-29T07:10:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wzxqf\",\n                \"uid\": \"006296a0-1cf6-4628-8a60-61214cb31c67\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"698\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-wzxqf to ip-172-20-61-32.ap-southeast-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:22Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:22Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wzxqf.168377febb399c54\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f20637f6-b7f3-4ef6-a022-0c9ae7891367\",\n                \"resourceVersion\": \"140\",\n                \"creationTimestamp\": \"2021-05-29T07:10:26Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wzxqf\",\n                \"uid\": \"006296a0-1cf6-4628-8a60-61214cb31c67\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"docker.io/cilium/cilium:v1.9.7\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:26Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:26Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wzxqf.168378013dfde17c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6847577b-8a16-4801-a3b9-b5c31e388af7\",\n     
           \"resourceVersion\": \"164\",\n                \"creationTimestamp\": \"2021-05-29T07:10:37Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wzxqf\",\n                \"uid\": \"006296a0-1cf6-4628-8a60-61214cb31c67\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"docker.io/cilium/cilium:v1.9.7\\\" in 10.78379476s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:37Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:37Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wzxqf.16837801aae6d27d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"312b4082-9b1f-4102-bb56-bdc64ff66576\",\n                \"resourceVersion\": \"165\",\n                \"creationTimestamp\": \"2021-05-29T07:10:39Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wzxqf\",\n                \"uid\": \"006296a0-1cf6-4628-8a60-61214cb31c67\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:39Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wzxqf.16837801b6f122d0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d19fbf42-c894-4ca2-a5cd-22821aad8bd1\",\n                \"resourceVersion\": \"166\",\n                \"creationTimestamp\": \"2021-05-29T07:10:39Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wzxqf\",\n                \"uid\": \"006296a0-1cf6-4628-8a60-61214cb31c67\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n            },\n            
\"firstTimestamp\": \"2021-05-29T07:10:39Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wzxqf.16837801db9b4284\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8e293af7-b359-4e31-b8e0-b85186ae1a7b\",\n                \"resourceVersion\": \"167\",\n                \"creationTimestamp\": \"2021-05-29T07:10:40Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wzxqf\",\n                \"uid\": \"006296a0-1cf6-4628-8a60-61214cb31c67\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"docker.io/cilium/cilium:v1.9.7\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:40Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:40Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wzxqf.16837801df7553ee\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f4125575-861e-4484-a200-112c49324b2c\",\n                \"resourceVersion\": \"168\",\n                \"creationTimestamp\": \"2021-05-29T07:10:40Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wzxqf\",\n                \"uid\": \"006296a0-1cf6-4628-8a60-61214cb31c67\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:40Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:40Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wzxqf.16837801e68d26d7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7a552b58-3fb6-464d-84c2-a694ec0664d4\",\n                \"resourceVersion\": \"169\",\n                \"creationTimestamp\": \"2021-05-29T07:10:40Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wzxqf\",\n                \"uid\": 
\"006296a0-1cf6-4628-8a60-61214cb31c67\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:40Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:40Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wzxqf.16837803c5d65fe2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f43abac9-28a9-4a2c-9881-d4b7dac473b4\",\n                \"resourceVersion\": \"176\",\n                \"creationTimestamp\": \"2021-05-29T07:10:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wzxqf\",\n                \"uid\": \"006296a0-1cf6-4628-8a60-61214cb31c67\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"703\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Unhealthy\",\n            \"message\": \"Readiness probe failed: Get \\\"http://127.0.0.1:9876/healthz\\\": dial tcp 127.0.0.1:9876: connect: connection refused\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:48Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:48Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium.168377e9f2f604e9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6095e32c-75c8-4e02-a7eb-b6f6aae5203b\",\n                \"resourceVersion\": \"50\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium\",\n                \"uid\": \"6070be26-bfff-4ff0-8619-33a3ed28ac99\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"284\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: cilium-9h72w\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium.168377fb83975a8c\",\n                \"namespace\": \"kube-system\",\n         
       \"uid\": \"58d05c8f-a311-41b9-af45-f8e6cfb6d24f\",\n                \"resourceVersion\": \"98\",\n                \"creationTimestamp\": \"2021-05-29T07:10:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium\",\n                \"uid\": \"6070be26-bfff-4ff0-8619-33a3ed28ac99\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"439\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: cilium-8q8dv\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:12Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium.168377fb95377b93\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a3fb04e3-ac6f-4078-91b6-4c4ac52f547a\",\n                \"resourceVersion\": \"108\",\n                \"creationTimestamp\": \"2021-05-29T07:10:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium\",\n                \"uid\": \"6070be26-bfff-4ff0-8619-33a3ed28ac99\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"639\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: cilium-wth8r\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:13Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium.168377fba4bf85aa\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"10519f1e-d2f3-442b-bbec-503f1af9fbf8\",\n                \"resourceVersion\": \"117\",\n                \"creationTimestamp\": \"2021-05-29T07:10:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium\",\n                \"uid\": \"6070be26-bfff-4ff0-8619-33a3ed28ac99\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"652\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: cilium-ph5rg\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:13Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium.168377fdd784bd7a\",\n                
\"namespace\": \"kube-system\",\n                \"uid\": \"d13dd2aa-6a46-4316-b05b-f629ad0bc11c\",\n                \"resourceVersion\": \"134\",\n                \"creationTimestamp\": \"2021-05-29T07:10:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium\",\n                \"uid\": \"6070be26-bfff-4ff0-8619-33a3ed28ac99\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"664\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: cilium-wzxqf\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:22Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:22Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772.168377e9f17788d7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"57f5a04e-131d-48cf-b6a8-4f23f914cc5d\",\n                \"resourceVersion\": \"89\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772\",\n                \"uid\": \"cd75600d-6b2e-4a4c-99f3-a2a0d82594b6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"421\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:54Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772.168377fb81f528d7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d0451d2b-f44c-4ab4-acd2-db7acc7ce84f\",\n                \"resourceVersion\": \"95\",\n                \"creationTimestamp\": \"2021-05-29T07:10:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772\",\n                \"uid\": \"cd75600d-6b2e-4a4c-99f3-a2a0d82594b6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"438\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": 
\"2021-05-29T07:10:12Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:12Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772.168377fddde7ed5f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c0c2573b-5ad0-4090-81eb-1be67040c0b7\",\n                \"resourceVersion\": \"161\",\n                \"creationTimestamp\": \"2021-05-29T07:10:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772\",\n                \"uid\": \"cd75600d-6b2e-4a4c-99f3-a2a0d82594b6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"635\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:23Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:33Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772.1683780286f6d509\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c27df401-b6d6-4cba-ab35-20df56c7cb3b\",\n                \"resourceVersion\": \"172\",\n                \"creationTimestamp\": \"2021-05-29T07:10:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772\",\n                \"uid\": \"cd75600d-6b2e-4a4c-99f3-a2a0d82594b6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"707\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-autoscaler-6f594f4c58-kb772 to ip-172-20-61-32.ap-southeast-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:43Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772.16837804b04fc154\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d64d0090-7980-438e-8579-5a1364d50320\",\n                \"resourceVersion\": \"179\",\n                \"creationTimestamp\": \"2021-05-29T07:10:52Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"coredns-autoscaler-6f594f4c58-kb772\",\n                \"uid\": \"cd75600d-6b2e-4a4c-99f3-a2a0d82594b6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"807\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:52Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:52Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772.168378054ee39a17\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"803c061a-e356-4525-aafb-1493bbb1bf98\",\n                \"resourceVersion\": \"180\",\n                \"creationTimestamp\": \"2021-05-29T07:10:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772\",\n                \"uid\": \"cd75600d-6b2e-4a4c-99f3-a2a0d82594b6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"807\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\\\" in 2.660472411s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:54Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772.168378055cf5859e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"40c17639-2935-4655-983f-260e92f1b189\",\n                \"resourceVersion\": \"181\",\n                \"creationTimestamp\": \"2021-05-29T07:10:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772\",\n                \"uid\": \"cd75600d-6b2e-4a4c-99f3-a2a0d82594b6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"807\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:55Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:55Z\",\n            \"count\": 1,\n            
\"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772.16837805648fb8fe\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"486fdb14-4f83-4f99-93ee-3bb74c45062f\",\n                \"resourceVersion\": \"182\",\n                \"creationTimestamp\": \"2021-05-29T07:10:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772\",\n                \"uid\": \"cd75600d-6b2e-4a4c-99f3-a2a0d82594b6\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"807\",\n                \"fieldPath\": \"spec.containers{autoscaler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container autoscaler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:55Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58.168377e9efa15801\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"292df172-1f9b-4189-b3a6-00b424ef412c\",\n                \"resourceVersion\": \"54\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-6f594f4c58\",\n                \"uid\": \"dc9609e5-a628-4239-80da-55dba3177857\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"407\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-autoscaler-6f594f4c58-kb772\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler.168377e9e8be7d02\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6bc965f8-5e96-4359-96ad-91f9a26e89c2\",\n                \"resourceVersion\": \"43\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler\",\n                \"uid\": \"1c1b302a-c00d-4629-ae8d-c1f5fd3bcc5c\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"332\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            
\"message\": \"Scaled up replica set coredns-autoscaler-6f594f4c58 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-5xwkz.168377e9f5348b23\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e95aef5b-9a43-45ff-bcc7-b434c6c368ac\",\n                \"resourceVersion\": \"90\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-5xwkz\",\n                \"uid\": \"44f2f8cb-ca2b-4a5d-a234-35ee5d2a5ece\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"424\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:54Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-5xwkz.168377fb833d4684\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9380c9a2-ec8e-4ffb-b3bd-54fb7eedbe86\",\n                \"resourceVersion\": \"97\",\n                \"creationTimestamp\": \"2021-05-29T07:10:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-5xwkz\",\n                \"uid\": \"44f2f8cb-ca2b-4a5d-a234-35ee5d2a5ece\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"449\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:12Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:12Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-5xwkz.168377fdde8ecb46\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"41f432b5-b4db-41c0-8cf7-eea727cc7005\",\n                \"resourceVersion\": \"139\",\n                \"creationTimestamp\": \"2021-05-29T07:10:23Z\"\n            },\n            
\"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-5xwkz\",\n                \"uid\": \"44f2f8cb-ca2b-4a5d-a234-35ee5d2a5ece\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"638\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:23Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:23Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-5xwkz.168378006e687714\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"57205991-4ffd-443d-b052-4aba7b0d9daf\",\n                \"resourceVersion\": \"163\",\n                \"creationTimestamp\": \"2021-05-29T07:10:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-5xwkz\",\n                \"uid\": \"44f2f8cb-ca2b-4a5d-a234-35ee5d2a5ece\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"708\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-f45c4bf76-5xwkz to ip-172-20-59-92.ap-southeast-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:34Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-5xwkz.168378024564b706\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fa933895-7f35-473d-a4a2-5f93fde3cf46\",\n                \"resourceVersion\": \"170\",\n                \"creationTimestamp\": \"2021-05-29T07:10:41Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-5xwkz\",\n                \"uid\": \"44f2f8cb-ca2b-4a5d-a234-35ee5d2a5ece\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"768\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"coredns/coredns:1.8.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:41Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            
\"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-5xwkz.16837803c2ca7f4e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cc244abe-6afc-412d-8fa6-2c34e73495ed\",\n                \"resourceVersion\": \"174\",\n                \"creationTimestamp\": \"2021-05-29T07:10:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-5xwkz\",\n                \"uid\": \"44f2f8cb-ca2b-4a5d-a234-35ee5d2a5ece\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"768\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"coredns/coredns:1.8.3\\\" in 6.398769683s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:48Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:48Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-5xwkz.16837803ce8f7a49\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"72924c34-5270-4249-a5ec-f4f48e1c3728\",\n                \"resourceVersion\": \"177\",\n                \"creationTimestamp\": \"2021-05-29T07:10:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-5xwkz\",\n                \"uid\": \"44f2f8cb-ca2b-4a5d-a234-35ee5d2a5ece\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"768\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:48Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:48Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-5xwkz.16837803d5bc8e26\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"71e628f4-a062-4e5f-8e00-b7c1c95bda79\",\n                \"resourceVersion\": \"178\",\n                \"creationTimestamp\": \"2021-05-29T07:10:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-5xwkz\",\n                \"uid\": \"44f2f8cb-ca2b-4a5d-a234-35ee5d2a5ece\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"768\",\n                
\"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:48Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:48Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-hwsqm.168378057680a291\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8ccd213c-de70-4464-9235-182f448793b4\",\n                \"resourceVersion\": \"185\",\n                \"creationTimestamp\": \"2021-05-29T07:10:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-hwsqm\",\n                \"uid\": \"1eb9b022-5c7c-4c63-b30b-259f19043328\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"879\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-f45c4bf76-hwsqm to ip-172-20-56-44.ap-southeast-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:55Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-hwsqm.16837805a5fc2cf4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"65fa31b2-9d73-4a70-9624-7ec3812116af\",\n                \"resourceVersion\": \"186\",\n                \"creationTimestamp\": \"2021-05-29T07:10:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-hwsqm\",\n                \"uid\": \"1eb9b022-5c7c-4c63-b30b-259f19043328\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"882\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"coredns/coredns:1.8.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:56Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-hwsqm.1683780712bedd36\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"aedc3a40-d291-4194-8a50-675af1835cdd\",\n                \"resourceVersion\": 
\"187\",\n                \"creationTimestamp\": \"2021-05-29T07:11:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-hwsqm\",\n                \"uid\": \"1eb9b022-5c7c-4c63-b30b-259f19043328\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"882\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"coredns/coredns:1.8.3\\\" in 6.119640784s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:11:02Z\",\n            \"lastTimestamp\": \"2021-05-29T07:11:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-hwsqm.168378071b8c6482\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d5e69a4c-3ef2-414c-976f-f8de8e212f37\",\n                \"resourceVersion\": \"188\",\n                \"creationTimestamp\": \"2021-05-29T07:11:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-hwsqm\",\n                \"uid\": \"1eb9b022-5c7c-4c63-b30b-259f19043328\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"882\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:11:02Z\",\n            \"lastTimestamp\": \"2021-05-29T07:11:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-hwsqm.1683780724270dc2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"95d92313-e778-4332-939f-2bd35bf33a48\",\n                \"resourceVersion\": \"189\",\n                \"creationTimestamp\": \"2021-05-29T07:11:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76-hwsqm\",\n                \"uid\": \"1eb9b022-5c7c-4c63-b30b-259f19043328\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"882\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:11:02Z\",\n            
\"lastTimestamp\": \"2021-05-29T07:11:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76.168377e9ef9ce14e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"87e31b56-b2b3-4080-8ab8-f5e8d655e9e0\",\n                \"resourceVersion\": \"51\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76\",\n                \"uid\": \"e95d5259-ec04-4bc4-944c-d69065836962\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"408\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-f45c4bf76-5xwkz\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76.1683780575f5691a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"67d35acb-beec-4edb-88a7-4ccb6b411b02\",\n                \"resourceVersion\": \"184\",\n                \"creationTimestamp\": \"2021-05-29T07:10:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-f45c4bf76\",\n                \"uid\": \"e95d5259-ec04-4bc4-944c-d69065836962\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"877\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-f45c4bf76-hwsqm\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:55Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.168377e9e9001fc0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"38f66e4e-e477-40b9-b737-55735a2cc472\",\n                \"resourceVersion\": \"46\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"0c94a181-2742-4032-916f-d98aeb6fb65e\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"325\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-f45c4bf76 to 1\",\n            \"source\": {\n                
\"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.1683780575327ff5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8f2475e5-6f47-4ea6-8d91-e478f2dd508a\",\n                \"resourceVersion\": \"183\",\n                \"creationTimestamp\": \"2021-05-29T07:10:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"0c94a181-2742-4032-916f-d98aeb6fb65e\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"875\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-f45c4bf76 to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:10:55Z\",\n            \"lastTimestamp\": \"2021-05-29T07:10:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5f98b58844-2t8db.168377e9f0959e27\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"94a180f9-0332-460c-af4b-9af00fb21aab\",\n                \"resourceVersion\": \"48\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-5f98b58844-2t8db\",\n                \"uid\": \"f5b550bc-8ee0-40a0-8bb2-0bf80d4bd2f5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"420\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/dns-controller-5f98b58844-2t8db to ip-172-20-36-217.ap-southeast-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5f98b58844-2t8db.168377ee265698e2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"740430be-9081-42a8-891e-11cfaabb2e49\",\n                \"resourceVersion\": \"69\",\n                \"creationTimestamp\": \"2021-05-29T07:09:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-5f98b58844-2t8db\",\n                \"uid\": \"f5b550bc-8ee0-40a0-8bb2-0bf80d4bd2f5\",\n                \"apiVersion\": 
\"v1\",\n                \"resourceVersion\": \"428\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:15Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:29Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5f98b58844-2t8db.168377ee26577f47\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"519e1429-18fd-432d-acbd-5c39c688703d\",\n                \"resourceVersion\": \"64\",\n                \"creationTimestamp\": \"2021-05-29T07:09:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-5f98b58844-2t8db\",\n                \"uid\": \"f5b550bc-8ee0-40a0-8bb2-0bf80d4bd2f5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"428\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Failed\",\n            \"message\": \"Error: services have not yet been read at least once, cannot construct envvars\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:15Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:15Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5f98b58844-2t8db.168377f1791c4ed1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"03677ee6-51e1-4aee-ae40-cce44bd91660\",\n                \"resourceVersion\": \"70\",\n                \"creationTimestamp\": \"2021-05-29T07:09:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-5f98b58844-2t8db\",\n                \"uid\": \"f5b550bc-8ee0-40a0-8bb2-0bf80d4bd2f5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"428\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:29Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n   
     },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5f98b58844-2t8db.168377f181a02bce\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"95a54dd8-4fd8-4a9c-bc2c-346aa882d64b\",\n                \"resourceVersion\": \"71\",\n                \"creationTimestamp\": \"2021-05-29T07:09:29Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-5f98b58844-2t8db\",\n                \"uid\": \"f5b550bc-8ee0-40a0-8bb2-0bf80d4bd2f5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"428\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:29Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5f98b58844.168377e9ef5e22bf\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f175baa0-0263-4382-8c63-665a9bd103d4\",\n                \"resourceVersion\": \"47\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-5f98b58844\",\n                \"uid\": \"648b2167-6e11-4175-9631-1a77bd455ffe\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"405\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: dns-controller-5f98b58844-2t8db\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller.168377e9e8c32898\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f376b818-33d8-445d-9475-896374138a1e\",\n                \"resourceVersion\": \"44\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller\",\n                \"uid\": \"fa30bbf6-8fb4-423f-b16f-5216e33b73eb\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"368\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set dns-controller-5f98b58844 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n          
  },\n            \"firstTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dd285723f0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e2d28065-d908-457b-ba3b-2d0a7eb0bcda\",\n                \"resourceVersion\": \"18\",\n                \"creationTimestamp\": \"2021-05-29T07:08:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"cca7e3b1e6ce0b382773e48432e2c8a9\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:02Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dfa1c77a47\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"88994124-2147-4942-b373-8556290e31c1\",\n                \"resourceVersion\": \"30\",\n                \"creationTimestamp\": \"2021-05-29T07:08:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"cca7e3b1e6ce0b382773e48432e2c8a9\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\" in 10.627325231s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:13Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dfac7ea754\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"99c31cfc-8d82-4969-af3e-fd9cca7682fc\",\n                \"resourceVersion\": \"31\",\n                \"creationTimestamp\": 
\"2021-05-29T07:08:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"cca7e3b1e6ce0b382773e48432e2c8a9\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:13Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dfb2cfbb04\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5c6d1d83-e57f-4c57-80ff-193a4af8491c\",\n                \"resourceVersion\": \"32\",\n                \"creationTimestamp\": \"2021-05-29T07:08:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"cca7e3b1e6ce0b382773e48432e2c8a9\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:13Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dd4245f858\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cdbd57ff-1856-4135-b69e-cf7e021ac3bb\",\n                \"resourceVersion\": \"22\",\n                \"creationTimestamp\": \"2021-05-29T07:08:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"47513f413ddb883da8fedfe162665fc4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            
\"firstTimestamp\": \"2021-05-29T07:08:02Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dfff39cfd1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"26a6a927-4a90-4468-8ed7-ab3ad2e7a804\",\n                \"resourceVersion\": \"36\",\n                \"creationTimestamp\": \"2021-05-29T07:08:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"47513f413ddb883da8fedfe162665fc4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\\\" in 11.760019603s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:14Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-36-217.ap-southeast-1.compute.internal.168377e0042db02f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f45dadd7-7bea-4d4a-9971-e417f92ec2a7\",\n                \"resourceVersion\": \"37\",\n                \"creationTimestamp\": \"2021-05-29T07:08:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"47513f413ddb883da8fedfe162665fc4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:14Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-36-217.ap-southeast-1.compute.internal.168377e00cfd5685\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a656b0a0-0bc6-4f27-9965-45f4839a061f\",\n                \"resourceVersion\": \"38\",\n                \"creationTimestamp\": \"2021-05-29T07:08:48Z\"\n            },\n            
\"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"47513f413ddb883da8fedfe162665fc4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:14Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-hqjvz.168377f640bea589\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e7d0e68f-3013-4b4f-8401-0d03dd40e0ff\",\n                \"resourceVersion\": \"84\",\n                \"creationTimestamp\": \"2021-05-29T07:09:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-hqjvz\",\n                \"uid\": \"76a323e2-8007-4d93-b828-6526380ab2c1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"551\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kops-controller-hqjvz to ip-172-20-36-217.ap-southeast-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:50Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:50Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-hqjvz.168377f65ce50eb9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2aad2748-e384-4dee-ade6-48cf1314ab8f\",\n                \"resourceVersion\": \"85\",\n                \"creationTimestamp\": \"2021-05-29T07:09:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-hqjvz\",\n                \"uid\": \"76a323e2-8007-4d93-b828-6526380ab2c1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"553\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kops-controller:1.22.0-alpha.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:50Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:50Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            
\"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-hqjvz.168377f6600111af\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cf79746a-b530-4417-9219-9046df1fa894\",\n                \"resourceVersion\": \"86\",\n                \"creationTimestamp\": \"2021-05-29T07:09:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-hqjvz\",\n                \"uid\": \"76a323e2-8007-4d93-b828-6526380ab2c1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"553\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:50Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:50Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-hqjvz.168377f666ee080c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9982b33b-732d-43ea-88db-22326df96e8f\",\n                \"resourceVersion\": \"87\",\n                \"creationTimestamp\": \"2021-05-29T07:09:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-hqjvz\",\n                \"uid\": \"76a323e2-8007-4d93-b828-6526380ab2c1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"553\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:50Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:50Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-leader.168377f6d3315021\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"39a92f84-228e-498f-a109-18e0a8ec6593\",\n                \"resourceVersion\": \"88\",\n                \"creationTimestamp\": \"2021-05-29T07:09:52Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ConfigMap\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-leader\",\n                \"uid\": \"32dd48d7-7d8f-437b-be82-752e2dcc0890\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"562\"\n            },\n            \"reason\": 
\"LeaderElection\",\n            \"message\": \"ip-172-20-36-217_a7342753-f10b-4bdd-a9b6-6019ecfe54d6 became leader\",\n            \"source\": {\n                \"component\": \"ip-172-20-36-217_a7342753-f10b-4bdd-a9b6-6019ecfe54d6\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:52Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:52Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller.168377f63f64f63f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"18784e0b-fb5b-4919-ac48-7287827126c4\",\n                \"resourceVersion\": \"83\",\n                \"creationTimestamp\": \"2021-05-29T07:09:50Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller\",\n                \"uid\": \"179e8ba4-1f3f-4f62-8628-6194c8dfab00\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"426\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kops-controller-hqjvz\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:09:50Z\",\n            \"lastTimestamp\": \"2021-05-29T07:09:50Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dd3c1b0544\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7f70af49-e9be-4e21-854c-ffde1d09b0a9\",\n                \"resourceVersion\": \"39\",\n                \"creationTimestamp\": \"2021-05-29T07:08:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"f2fe32d4a75cc1f5fa1a3f919a3e3b23\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:02Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:26Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal.168377ddc8e6c7b6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"54a19d6a-3d3b-4707-a29c-508b71a76a8b\",\n                \"resourceVersion\": \"40\",\n                
\"creationTimestamp\": \"2021-05-29T07:08:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"f2fe32d4a75cc1f5fa1a3f919a3e3b23\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:05Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:26Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal.168377ddd8fd1ac5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0a16253d-0fc6-4a07-8cc3-7391a688d4c0\",\n                \"resourceVersion\": \"41\",\n                \"creationTimestamp\": \"2021-05-29T07:08:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"f2fe32d4a75cc1f5fa1a3f919a3e3b23\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:05Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:26Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal.168377ddd90554fd\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"389f2b4b-23e8-468c-8d8c-0ed5f02e5893\",\n                \"resourceVersion\": \"27\",\n                \"creationTimestamp\": \"2021-05-29T07:08:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"f2fe32d4a75cc1f5fa1a3f919a3e3b23\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:05Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dde138bac9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2cbdbb21-fde0-46a0-814c-7bcd4944fc08\",\n                \"resourceVersion\": \"28\",\n                \"creationTimestamp\": \"2021-05-29T07:08:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"f2fe32d4a75cc1f5fa1a3f919a3e3b23\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:05Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal.168377de027b8822\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b9c28394-508b-4475-aa34-2595f0b5c275\",\n                \"resourceVersion\": \"29\",\n                \"creationTimestamp\": \"2021-05-29T07:08:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"f2fe32d4a75cc1f5fa1a3f919a3e3b23\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:06Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:06Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dd285c43d2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6a997a7b-86b3-4fc0-b931-ff1e8d232ae8\",\n                \"resourceVersion\": \"19\",\n                \"creationTimestamp\": \"2021-05-29T07:08:44Z\"\n            },\n            
\"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"da789bd2fa0f396ff7b20d0b5b62759d\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.1\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:02Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dfe0ceb54c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9cf210f5-3513-426e-8545-eadedca6c0f6\",\n                \"resourceVersion\": \"33\",\n                \"creationTimestamp\": \"2021-05-29T07:08:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"da789bd2fa0f396ff7b20d0b5b62759d\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.1\\\" in 11.684429896s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:14Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dfe4715fb2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"acef592f-beba-461a-80da-2b360aab6abc\",\n                \"resourceVersion\": \"34\",\n                \"creationTimestamp\": \"2021-05-29T07:08:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"da789bd2fa0f396ff7b20d0b5b62759d\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:14Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dfeb71d1fa\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"61a8cb02-1f29-46bc-8153-ace71171a84f\",\n                \"resourceVersion\": \"35\",\n                \"creationTimestamp\": \"2021-05-29T07:08:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"da789bd2fa0f396ff7b20d0b5b62759d\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:14Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.168377e676c15ca4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"43e4677b-9ded-476a-b4c0-805d824d1817\",\n                \"resourceVersion\": \"6\",\n                \"creationTimestamp\": \"2021-05-29T07:08:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"84a4e303-7b68-47c2-a28e-1e1351d7342a\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"219\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-36-217_f8cf9af0-2445-4aa1-9d96-5ef57db7212a became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:42Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-36-217.ap-southeast-1.compute.internal.168377dd41290b3a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8b3a9628-5552-4558-91f6-fcc105f81690\",\n                \"resourceVersion\": \"21\",\n                \"creationTimestamp\": \"2021-05-29T07:08:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n          
      \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"c0ae5c27345cf6526df59af467bd2217\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-scheduler-amd64:v1.21.1\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:02Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-36-217.ap-southeast-1.compute.internal.168377ddc94567eb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0f7033f7-2158-4c89-a9d8-5858a6122450\",\n                \"resourceVersion\": \"24\",\n                \"creationTimestamp\": \"2021-05-29T07:08:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"c0ae5c27345cf6526df59af467bd2217\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:05Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-36-217.ap-southeast-1.compute.internal.168377ddd4d77e30\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9f183a1e-b4d9-4914-ac00-b052e4d61034\",\n                \"resourceVersion\": \"25\",\n                \"creationTimestamp\": \"2021-05-29T07:08:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"uid\": \"c0ae5c27345cf6526df59af467bd2217\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:05Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:05Z\",\n            \"count\": 
1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.168377e69162b974\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3bcecc28-5682-44c6-88c5-e52d81d79329\",\n                \"resourceVersion\": \"9\",\n                \"creationTimestamp\": \"2021-05-29T07:08:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"e7eae44c-1a41-4c8b-909d-1a34dbde9706\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"221\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-36-217_dafdf34c-5bc3-4dc7-b5c5-63cae420210e became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-05-29T07:08:42Z\",\n            \"lastTimestamp\": \"2021-05-29T07:08:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1806\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1814\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0e6d7905-a762-4786-b3a1-a09c39afaa94\",\n                \"resourceVersion\": \"327\",\n                \"creationTimestamp\": \"2021-05-29T07:08:45Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.8.3-kops.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"metadata\\\":{\\\"annotations\\\":{\\\"prometheus.io/port\\\":\\\"9153\\\",\\\"prometheus.io/scrape\\\":\\\"true\\\"},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.8.3-kops.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"kube-dns\\\",\\\"namespace\\\":\\\"kube-system\\\",\\\"resourceVersion\\\":\\\"0\\\"},\\\"spec\\\":{\\\"clusterIP\\\":\\\"100.64.0.10\\\",\\\"ports\\\":[{\\\"name\\\":\\\"dns\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"port\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}],\\\"selector\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}}}\\n\",\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                \"selector\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"clusterIP\": \"100.64.0.10\",\n                \"clusterIPs\": [\n                    \"100.64.0.10\"\n                ],\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\",\n                \"ipFamilies\": [\n                    \"IPv4\"\n                ],\n                \"ipFamilyPolicy\": \"SingleStack\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1826\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"cilium\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6070be26-bfff-4ff0-8619-33a3ed28ac99\",\n                \"resourceVersion\": \"977\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-29T07:08:44Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"networking.cilium.io\",\n                    \"addon.kops.k8s.io/version\": \"1.9.4-kops.1\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-app\": \"cilium\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"role.kubernetes.io/networking\": \"1\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": 
\"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"networking.cilium.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.9.4-kops.1\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-app\\\":\\\"cilium\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"role.kubernetes.io/networking\\\":\\\"1\\\"},\\\"name\\\":\\\"cilium\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"cilium\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-app\\\":\\\"cilium\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"podAntiAffinity\\\":{\\\"requiredDuringSchedulingIgnoredDuringExecution\\\":[{\\\"labelSelector\\\":{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"k8s-app\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"cilium\\\"]}]},\\\"topologyKey\\\":\\\"kubernetes.io/hostname\\\"}]}},\\\"containers\\\":[{\\\"args\\\":[\\\"--config-dir=/tmp/cilium/config-map\\\"],\\\"command\\\":[\\\"cilium-agent\\\"],\\\"env\\\":[{\\\"name\\\":\\\"K8S_NODE_NAME\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"apiVersion\\\":\\\"v1\\\",\\\"fieldPath\\\":\\\"spec.nodeName\\\"}}},{\\\"name\\\":\\\"CILIUM_K8S_NAMESPACE\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"apiVersion\\\":\\\"v1\\\",\\\"fieldPath\\\":\\\"metadata.namespace\\\"}}},{\\\"name\\\":\\\"CILIUM_FLANNEL_MASTER_DEVICE\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"flannel-master-device\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}},{\\\"name\\\":\\\"CILIUM_FLANNEL_UNINSTALL_ON_EXIT\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"flannel-uninstall-on-exit\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}},{\\\"name\\\":\\\"CILIUM_CLUSTERMESH_CONFIG\\\",\\\"value\\\":\\\"/var/lib/cilium/clustermesh/\\\"},{\\\"name\\\":\\\"CILIUM_CNI_CHAINING_MODE\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"cni-chaining-mode\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}},{\\\"name\\\":\\\"CILIUM_CUSTOM_CNI_CONF\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"custom-cni-conf\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}},{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\\\"},{\\\"name\\\":\\\"KUBERNETES_SERVICE_PORT\\\",\\\"value\\\":\\\"443\\\"}],\\\"image\\\":\\\"docker.io/cilium/cilium:v1.9.7\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"lifecycle\\\":{\\\"postStart\\\":{\\\"exec\\\":{\\\"command\\\":[\\\"/cni-install.sh\\\"]}},\\\"preStop\\\":{\\\"exec\\\":{\\\"command\\\":[\\\"/cni-uninstall.sh\\\"]}}},\\\"livenessProbe\\\":{\\\"failureThreshold\\\":10,\\\"httpGet\\\":{\\\"host\\\":\\\"127.0.0.1\\\",\\\"httpHeaders\\\":[{\\\"name\\\":\\\"brief\\\",\\\"value\\\":\\\"true\\\"}],\\\"path\\\":\\\"/healthz\\\",\\\"port\\\":9876,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":120,\\\"periodSeconds\\\":30,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"name\\\":\\\"cilium-agent\\\",\\\"readinessProbe\\\":{\\\"failureThreshold\\\":3,\\\"httpGet\\\":{\\\"host\\\":\\\"127.0.0.1\\\",\\\"httpH
eaders\\\":[{\\\"name\\\":\\\"brief\\\",\\\"value\\\":\\\"true\\\"}],\\\"path\\\":\\\"/healthz\\\",\\\"port\\\":9876,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":5,\\\"periodSeconds\\\":30,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"25m\\\",\\\"memory\\\":\\\"128Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_ADMIN\\\",\\\"SYS_MODULE\\\"]},\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/sys/fs/bpf\\\",\\\"name\\\":\\\"bpf-maps\\\"},{\\\"mountPath\\\":\\\"/var/run/cilium\\\",\\\"name\\\":\\\"cilium-run\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cni-path\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"etc-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cilium/clustermesh\\\",\\\"name\\\":\\\"clustermesh-secrets\\\",\\\"readOnly\\\":true},{\\\"mountPath\\\":\\\"/tmp/cilium/config-map\\\",\\\"name\\\":\\\"cilium-config-path\\\",\\\"readOnly\\\":true},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\"}]}],\\\"hostNetwork\\\":true,\\\"initContainers\\\":[{\\\"command\\\":[\\\"/init-container.sh\\\"],\\\"env\\\":[{\\\"name\\\":\\\"CILIUM_ALL_STATE\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"clean-cilium-state\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}},{\\\"name\\\":\\\"CILIUM_BPF_STATE\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"clean-cilium-bpf-state\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}},{\\\"name\\\":\\\"CILIUM_WAIT_BPF_MOUNT\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"wait-bpf-mount\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}}],\\\"image\\\":\\\"docker.io/cilium/cilium:v1.9.7\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"clean-cilium-state\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"100Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"100Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_ADMIN\\\"]},\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/sys/fs/bpf\\\",\\\"mountPropagation\\\":\\\"HostToContainer\\\",\\\"name\\\":\\\"bpf-maps\\\"},{\\\"mountPath\\\":\\\"/var/run/cilium\\\",\\\"name\\\":\\\"cilium-run\\\"}]}],\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"restartPolicy\\\":\\\"Always\\\",\\\"serviceAccount\\\":\\\"cilium\\\",\\\"serviceAccountName\\\":\\\"cilium\\\",\\\"terminationGracePeriodSeconds\\\":1,\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/var/run/cilium\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cilium-run\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/sys/fs/bpf\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"bpf-maps\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/opt/cni/bin\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-path\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"etc-cni-netd\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"name\\\":\\\"clustermesh-secrets\\\",\\\"secret\\\":{\\\"defaultMode\\\":4
20,\\\"optional\\\":true,\\\"secretName\\\":\\\"cilium-clustermesh\\\"}},{\\\"configMap\\\":{\\\"name\\\":\\\"cilium-config\\\"},\\\"name\\\":\\\"cilium-config-path\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"OnDelete\\\"}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"cilium\",\n                        \"kubernetes.io/cluster-service\": \"true\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"cilium\",\n                            \"kubernetes.io/cluster-service\": \"true\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"cilium-run\",\n                                \"hostPath\": {\n                                    \"path\": \"/var/run/cilium\",\n                                    \"type\": \"DirectoryOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"hostPath\": {\n                                    \"path\": \"/sys/fs/bpf\",\n                                    \"type\": \"DirectoryOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"cni-path\",\n                                \"hostPath\": {\n                                    \"path\": \"/opt/cni/bin\",\n                                    \"type\": \"DirectoryOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"etc-cni-netd\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/cni/net.d\",\n                                    \"type\": \"DirectoryOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"hostPath\": {\n                                    \"path\": \"/lib/modules\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"hostPath\": {\n                                    \"path\": \"/run/xtables.lock\",\n                                    \"type\": \"FileOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"clustermesh-secrets\",\n                                \"secret\": {\n                                    \"secretName\": \"cilium-clustermesh\",\n                                    \"defaultMode\": 420,\n                                    \"optional\": true\n                                }\n                            },\n                       
     {\n                                \"name\": \"cilium-config-path\",\n                                \"configMap\": {\n                                    \"name\": \"cilium-config\",\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"initContainers\": [\n                            {\n                                \"name\": \"clean-cilium-state\",\n                                \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                                \"command\": [\n                                    \"/init-container.sh\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"CILIUM_ALL_STATE\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"clean-cilium-state\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_BPF_STATE\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"clean-cilium-bpf-state\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_WAIT_BPF_MOUNT\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"wait-bpf-mount\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"100Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"100Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"bpf-maps\",\n                                        \"mountPath\": \"/sys/fs/bpf\",\n                                        \"mountPropagation\": \"HostToContainer\"\n                                    },\n                                    {\n                                        \"name\": \"cilium-run\",\n                                        \"mountPath\": 
\"/var/run/cilium\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_ADMIN\"\n                                        ]\n                                    },\n                                    \"privileged\": true\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"cilium-agent\",\n                                \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                                \"command\": [\n                                    \"cilium-agent\"\n                                ],\n                                \"args\": [\n                                    \"--config-dir=/tmp/cilium/config-map\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"K8S_NODE_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"spec.nodeName\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"metadata.namespace\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_FLANNEL_MASTER_DEVICE\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"flannel-master-device\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_FLANNEL_UNINSTALL_ON_EXIT\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"flannel-uninstall-on-exit\",\n                                                \"optional\": true\n                                            }\n                            
            }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_CLUSTERMESH_CONFIG\",\n                                        \"value\": \"/var/lib/cilium/clustermesh/\"\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_CNI_CHAINING_MODE\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"cni-chaining-mode\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_CUSTOM_CNI_CONF\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"custom-cni-conf\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                                    },\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                        \"value\": \"443\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"25m\",\n                                        \"memory\": \"128Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"bpf-maps\",\n                                        \"mountPath\": \"/sys/fs/bpf\"\n                                    },\n                                    {\n                                        \"name\": \"cilium-run\",\n                                        \"mountPath\": \"/var/run/cilium\"\n                                    },\n                                    {\n                                        \"name\": \"cni-path\",\n                                        \"mountPath\": \"/host/opt/cni/bin\"\n                                    },\n                                    {\n                                        \"name\": \"etc-cni-netd\",\n                                        \"mountPath\": \"/host/etc/cni/net.d\"\n                                    },\n                                    {\n                                        \"name\": \"clustermesh-secrets\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/var/lib/cilium/clustermesh\"\n                       
             },\n                                    {\n                                        \"name\": \"cilium-config-path\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/tmp/cilium/config-map\"\n                                    },\n                                    {\n                                        \"name\": \"lib-modules\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/lib/modules\"\n                                    },\n                                    {\n                                        \"name\": \"xtables-lock\",\n                                        \"mountPath\": \"/run/xtables.lock\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/healthz\",\n                                        \"port\": 9876,\n                                        \"host\": \"127.0.0.1\",\n                                        \"scheme\": \"HTTP\",\n                                        \"httpHeaders\": [\n                                            {\n                                                \"name\": \"brief\",\n                                                \"value\": \"true\"\n                                            }\n                                        ]\n                                    },\n                                    \"initialDelaySeconds\": 120,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 30,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 10\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/healthz\",\n                                        \"port\": 9876,\n                                        \"host\": \"127.0.0.1\",\n                                        \"scheme\": \"HTTP\",\n                                        \"httpHeaders\": [\n                                            {\n                                                \"name\": \"brief\",\n                                                \"value\": \"true\"\n                                            }\n                                        ]\n                                    },\n                                    \"initialDelaySeconds\": 5,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 30,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"lifecycle\": {\n                                    \"postStart\": {\n                                        \"exec\": {\n                                            \"command\": [\n                                                \"/cni-install.sh\"\n                                            ]\n                                        }\n                                    },\n                                    \"preStop\": {\n                                        \"exec\": {\n                   
                         \"command\": [\n                                                \"/cni-uninstall.sh\"\n                                            ]\n                                        }\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_ADMIN\",\n                                            \"SYS_MODULE\"\n                                        ]\n                                    },\n                                    \"privileged\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 1,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"cilium\",\n                        \"serviceAccount\": \"cilium\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"requiredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"labelSelector\": {\n                                            \"matchExpressions\": [\n                                                {\n                                                    \"key\": \"k8s-app\",\n                                                    \"operator\": \"In\",\n                                                    \"values\": [\n                                                        \"cilium\"\n                                                    ]\n                                                }\n                                            ]\n                                        },\n                                        \"topologyKey\": \"kubernetes.io/hostname\"\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"OnDelete\"\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 5,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 5,\n                \"numberReady\": 5,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 5,\n                \"numberAvailable\": 5\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller\",\n                \"namespace\": 
\"kube-system\",\n                \"uid\": \"179e8ba4-1f3f-4f62-8628-6194c8dfab00\",\n                \"resourceVersion\": \"561\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-29T07:08:44Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"kops-controller.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.22.0-alpha.1\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"version\": \"v1.22.0-alpha.1\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.22.0-alpha.1\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.22.0-alpha.1\\\"},\\\"name\\\":\\\"kops-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kops-controller\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"dns.alpha.kubernetes.io/internal\\\":\\\"kops-controller.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.22.0-alpha.1\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/kops-controller\\\",\\\"--v=2\\\",\\\"--conf=/etc/kubernetes/kops-controller/config/config.yaml\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/kops-controller:1.22.0-alpha.1\\\",\\\"name\\\":\\\"kops-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/config/\\\",\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/pki/\\\",\\\"name\\\":\\\"kops-controller-pki\\\"}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"kops.k8s.io/kops-controller-pki\\\":\\\"\\\",\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"serviceAccount\\\":\\\"kops-controller\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"node-role.kubernetes.io/master\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"name\\\":\\\"kops-controller\\\"},\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/kubernetes/kops-controller/\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"kops-controller-pki\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"OnDelete\\\"}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kops-controller\"\n                    }\n                },\n                
\"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                            \"k8s-app\": \"kops-controller\",\n                            \"version\": \"v1.22.0-alpha.1\"\n                        },\n                        \"annotations\": {\n                            \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"configMap\": {\n                                    \"name\": \"kops-controller\",\n                                    \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/kubernetes/kops-controller/\",\n                                    \"type\": \"Directory\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kops-controller\",\n                                \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-alpha.1\",\n                                \"command\": [\n                                    \"/kops-controller\",\n                                    \"--v=2\",\n                                    \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kops-controller-config\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                                    },\n                                    {\n                                        \"name\": \"kops-controller-pki\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            
}\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kops.k8s.io/kops-controller-pki\": \"\",\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"kops-controller\",\n                        \"serviceAccount\": \"kops-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"OnDelete\"\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 1,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 1,\n                \"numberReady\": 1,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 1,\n                \"numberAvailable\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1835\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6c90115a-5541-4c35-bb46-d8c6c48e9626\",\n                \"resourceVersion\": \"518\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-29T07:08:44Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"networking.cilium.io\",\n                    \"addon.kops.k8s.io/version\": \"1.9.4-kops.1\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"io.cilium/app\": \"operator\",\n                    \"name\": \"cilium-operator\",\n                    \"role.kubernetes.io/networking\": \"1\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"networking.cilium.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.9.4-kops.1\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"io.cilium/app\\\":\\\"operator\\\",\\\"name\\\":\\\"cilium-operator\\\",\\\"role.kubernetes.io/networking\\\":\\\"1\\\"},\\\"name\\\":\\\"cilium-operator\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"replicas\\\":1,\\\"selector\\\":{\\\"matchLabels\\\":{\\\"io.cilium/app\\\":\\\"operator\\\",\\\"name\\\":\\\"cilium-operator\\\"}},\\\"strategy\\\":{\\\"rollingUpdate\\\":{\\\"maxSurge\\\":1,\\\"maxUnavailable\\\":1},\\\"type\\\":\\\"RollingUpdate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"io.cilium/app\\\":\\\"operator\\\",\\\"name\\\":\\\"cilium-operator\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"args\\\":[\\\"--config-dir=/tmp/cilium/config-map\\\",\\\"--debug=$(CILIUM_DEBUG)\\\",\\\"--eni-tags=KubernetesCluster=e2e-459b123097-cb70c.test-cncf-aws.k8s.io\\\"],\\\"command\\\":[\\\"cilium-operator\\\"],\\\"env\\\":[{\\\"name\\\":\\\"K8S_NODE_NAME\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"apiVersion\\\":\\\"v1\\\",\\\"fieldPath\\\":\\\"spec.nodeName\\\"}}},{\\\"name\\\":\\\"CILIUM_K8S_NAMESPACE\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"apiVersion\\\":\\\"v1\\\",\\\"fieldPath\\\":\\\"metadata.namespace\\\"}}},{\\\"name\\\":\\\"CILIUM_DEBUG\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"debug\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}},{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\\\"},{\\\"name\\\":\\\"KUBERNETES_SERVICE_PORT\\\",\\\"value\\\":\\\"443\\\"}],\\\"image\\\":\\\"docker.io/cilium/operator:v1.9.7\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"livenessProbe\\\":{\\\"httpGet\\\":{\\\"host\\\":\\\"127.0.0.1\\\",\\\"path\\\":\\\"/healthz\\\",\\\"port\\\":9234,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":60,\\\"periodSeconds\\\":10,\\\"timeoutSeconds\\\":3},\\\"name\\\":\\\"cilium-operator\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"25m\\\",\\\"memory\\\":\\\"128Mi\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/cilium/config-map\\\",\\\"name\\\":\\\"cilium-config-path\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"restartPolicy\\\":\\\"Always\\\",\\\"serviceAccount\\\":\\\"cilium-operator\\\",\\\"serviceAccountName\\\":\\\"cilium-operator\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"name\\\":\\\"cilium-config\\\"},\\\"name\\\":\\\"cilium-config-path\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"io.cilium/app\": \"operator\",\n                        \"name\": \"cilium-operator\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"io.cilium/app\": \"operator\",\n                            \"name\": \"cilium-operator\"\n                        }\n 
                   },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"configMap\": {\n                                    \"name\": \"cilium-config\",\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"cilium-operator\",\n                                \"image\": \"docker.io/cilium/operator:v1.9.7\",\n                                \"command\": [\n                                    \"cilium-operator\"\n                                ],\n                                \"args\": [\n                                    \"--config-dir=/tmp/cilium/config-map\",\n                                    \"--debug=$(CILIUM_DEBUG)\",\n                                    \"--eni-tags=KubernetesCluster=e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"K8S_NODE_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"spec.nodeName\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"metadata.namespace\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_DEBUG\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"debug\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                                    },\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                        \"value\": \"443\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"25m\",\n                                        
\"memory\": \"128Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"cilium-config-path\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/tmp/cilium/config-map\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/healthz\",\n                                        \"port\": 9234,\n                                        \"host\": \"127.0.0.1\",\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 3,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"cilium-operator\",\n                        \"serviceAccount\": \"cilium-operator\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": 1\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-05-29T07:08:57Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:08:57Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n               
         \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-05-29T07:09:32Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:08:57Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"cilium-operator-7cd4557b96\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0c94a181-2742-4032-916f-d98aeb6fb65e\",\n                \"resourceVersion\": \"930\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-05-29T07:08:45Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.8.3-kops.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.8.3-kops.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"coredns\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"strategy\\\":{\\\"rollingUpdate\\\":{\\\"maxSurge\\\":\\\"10%\\\",\\\"maxUnavailable\\\":1},\\\"type\\\":\\\"RollingUpdate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"podAntiAffinity\\\":{\\\"preferredDuringSchedulingIgnoredDuringExecution\\\":[{\\\"podAffinityTerm\\\":{\\\"labelSelector\\\":{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"k8s-app\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"kube-dns\\\"]}]},\\\"topologyKey\\\":\\\"kubernetes.io/hostname\\\"},\\\"weight\\\":100}]}},\\\"containers\\\":[{\\\"args\\\":[\\\"-conf\\\",\\\"/etc/coredns/Corefile\\\"],\\\"image\\\":\\\"coredns/coredns:1.8.3\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"livenessProbe\\\":{\\\"failureThreshold\\\":5,\\\"httpGet\\\":{\\\"path\\\":\\\"/health\\\",\\\"port\\\":8080,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":60,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"name\\\":\\\"coredns\\\",\\\"ports\\\":[{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns\\\",\\\"protocol\\\":\\\"UDP\\\"},{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns-tcp\\\",\\\"protocol\\\":\\\"TCP\\\"},{\\\"containerPort\\\":9153,\\\"name\\\":\\\"metrics\\\",\\\"protocol\\\":\\\"TCP\\\"}],\\\"readinessProbe\\\":{\\\"httpGet\\\":{\\\"path\\\":\\\"/ready\\\",\\\"port\\\":8181,\\\"scheme\\\":\\\"HTTP\\\"}},\\\"resourc
es\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"170Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"securityContext\\\":{\\\"allowPrivilegeEscalation\\\":false,\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_BIND_SERVICE\\\"],\\\"drop\\\":[\\\"all\\\"]},\\\"readOnlyRootFilesystem\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/coredns\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"nodeSelector\\\":{\\\"beta.kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"items\\\":[{\\\"key\\\":\\\"Corefile\\\",\\\"path\\\":\\\"Corefile\\\"}],\\\"name\\\":\\\"coredns\\\"},\\\"name\\\":\\\"config-volume\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"coredns/coredns:1.8.3\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                         
           \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n              
                  \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": \"10%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 2,\n                \"replicas\": 2,\n                \"updatedReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-05-29T07:08:57Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:08:57Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-05-29T07:10:49Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:08:57Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-f45c4bf76\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1c1b302a-c00d-4629-ae8d-c1f5fd3bcc5c\",\n                \"resourceVersion\": \"893\",\n                \"generation\": 1,\n                
\"creationTimestamp\": \"2021-05-29T07:08:45Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.8.3-kops.3\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"kubernetes.io/cluster-service\": \"true\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.8.3-kops.3\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"coredns-autoscaler\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"},\\\"name\\\":\\\"coredns-autoscaler\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/cluster-proportional-autoscaler\\\",\\\"--namespace=kube-system\\\",\\\"--configmap=coredns-autoscaler\\\",\\\"--target=Deployment/coredns\\\",\\\"--default-params={\\\\\\\"linear\\\\\\\":{\\\\\\\"coresPerReplica\\\\\\\":256,\\\\\\\"nodesPerReplica\\\\\\\":16,\\\\\\\"preventSinglePointFailure\\\\\\\":true}}\\\",\\\"--logtostderr=true\\\",\\\"--v=2\\\"],\\\"image\\\":\\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\\\",\\\"name\\\":\\\"autoscaler\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"10Mi\\\"}}}],\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns-autoscaler\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    
\"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": \"25%\",\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-05-29T07:10:56Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:10:56Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-05-29T07:10:56Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:08:57Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-autoscaler-6f594f4c58\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller\",\n                \"namespace\": 
\"kube-system\",\n                \"uid\": \"fa30bbf6-8fb4-423f-b16f-5216e33b73eb\",\n                \"resourceVersion\": \"511\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-29T07:08:47Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"dns-controller.addons.k8s.io\",\n                    \"addon.kops.k8s.io/version\": \"1.22.0-alpha.1\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"version\": \"v1.22.0-alpha.1\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"addon.kops.k8s.io/version\\\":\\\"1.22.0-alpha.1\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.22.0-alpha.1\\\"},\\\"name\\\":\\\"dns-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"replicas\\\":1,\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"dns-controller\\\"}},\\\"strategy\\\":{\\\"type\\\":\\\"Recreate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.22.0-alpha.1\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/dns-controller\\\",\\\"--watch-ingress=false\\\",\\\"--dns=aws-route53\\\",\\\"--zone=*/ZEMLNXIIWQ0RV\\\",\\\"--zone=*/*\\\",\\\"-v=2\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.1\\\",\\\"name\\\":\\\"dns-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true}}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccount\\\":\\\"dns-controller\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"version\": \"v1.22.0-alpha.1\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n           
             \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.1\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"Recreate\"\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-05-29T07:09:30Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:09:30Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n 
                       \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-05-29T07:09:30Z\",\n                        \"lastTransitionTime\": \"2021-05-29T07:08:57Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"dns-controller-5f98b58844\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"1842\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-7cd4557b96\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dff7677a-5be4-43cb-822e-1f7ff1770ff9\",\n                \"resourceVersion\": \"517\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\",\n                \"labels\": {\n                    \"io.cilium/app\": \"operator\",\n                    \"name\": \"cilium-operator\",\n                    \"pod-template-hash\": \"7cd4557b96\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"2\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"cilium-operator\",\n                        \"uid\": \"6c90115a-5541-4c35-bb46-d8c6c48e9626\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"io.cilium/app\": \"operator\",\n                        \"name\": \"cilium-operator\",\n                        \"pod-template-hash\": \"7cd4557b96\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"io.cilium/app\": \"operator\",\n                            \"name\": \"cilium-operator\",\n                            \"pod-template-hash\": \"7cd4557b96\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"configMap\": {\n                                    \"name\": \"cilium-config\",\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"cilium-operator\",\n                                \"image\": \"docker.io/cilium/operator:v1.9.7\",\n                                \"command\": [\n                                    \"cilium-operator\"\n                                ],\n                               
 \"args\": [\n                                    \"--config-dir=/tmp/cilium/config-map\",\n                                    \"--debug=$(CILIUM_DEBUG)\",\n                                    \"--eni-tags=KubernetesCluster=e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"K8S_NODE_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"spec.nodeName\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"metadata.namespace\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_DEBUG\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"debug\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                                    },\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                        \"value\": \"443\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"25m\",\n                                        \"memory\": \"128Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"cilium-config-path\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/tmp/cilium/config-map\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/healthz\",\n                                        \"port\": 9234,\n                                        \"host\": \"127.0.0.1\",\n                                        \"scheme\": \"HTTP\"\n                      
              },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 3,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"cilium-operator\",\n                        \"serviceAccount\": \"cilium-operator\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dc9609e5-a628-4239-80da-55dba3177857\",\n                \"resourceVersion\": \"892\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"6f594f4c58\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"2\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns-autoscaler\",\n                        \"uid\": \"1c1b302a-c00d-4629-ae8d-c1f5fd3bcc5c\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\",\n                        \"pod-template-hash\": \"6f594f4c58\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n     
                   \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\",\n                            \"pod-template-hash\": \"6f594f4c58\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e95d5259-ec04-4bc4-944c-d69065836962\",\n                \"resourceVersion\": \"927\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"f45c4bf76\"\n                },\n                \"annotations\": {\n                    
\"deployment.kubernetes.io/desired-replicas\": \"2\",\n                    \"deployment.kubernetes.io/max-replicas\": \"3\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns\",\n                        \"uid\": \"0c94a181-2742-4032-916f-d98aeb6fb65e\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\",\n                        \"pod-template-hash\": \"f45c4bf76\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\",\n                            \"pod-template-hash\": \"f45c4bf76\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"coredns/coredns:1.8.3\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    
\"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 
100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 2,\n                \"fullyLabeledReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"observedGeneration\": 2\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5f98b58844\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"648b2167-6e11-4175-9631-1a77bd455ffe\",\n                \"resourceVersion\": \"510\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"5f98b58844\",\n                    \"version\": \"v1.22.0-alpha.1\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"1\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"dns-controller\",\n                        \"uid\": \"fa30bbf6-8fb4-423f-b16f-5216e33b73eb\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\",\n                        \"pod-template-hash\": \"5f98b58844\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": 
null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"pod-template-hash\": \"5f98b58844\",\n                            \"version\": \"v1.22.0-alpha.1\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.1\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n    
    \"resourceVersion\": \"1850\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"cilium-8q8dv\",\n                \"generateName\": \"cilium-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7c712b38-284f-4396-9c5f-7e3766caf358\",\n                \"resourceVersion\": \"820\",\n                \"creationTimestamp\": \"2021-05-29T07:10:12Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"d8b94ddb6\",\n                    \"k8s-app\": \"cilium\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"cilium\",\n                        \"uid\": \"6070be26-bfff-4ff0-8619-33a3ed28ac99\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cilium-run\",\n                        \"hostPath\": {\n                            \"path\": \"/var/run/cilium\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"bpf-maps\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/bpf\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni-path\",\n                        \"hostPath\": {\n                            \"path\": \"/opt/cni/bin\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-cni-netd\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"clustermesh-secrets\",\n                        \"secret\": {\n                            \"secretName\": \"cilium-clustermesh\",\n                            \"defaultMode\": 420,\n                            \"optional\": true\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-config-path\",\n                        \"configMap\": {\n                            \"name\": 
\"cilium-config\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-njv2z\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"command\": [\n                            \"/init-container.sh\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"CILIUM_ALL_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_BPF_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-bpf-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_WAIT_BPF_MOUNT\",\n                                
\"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"wait-bpf-mount\",\n                                        \"optional\": true\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-njv2z\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"command\": [\n                            \"cilium-agent\"\n                        ],\n                        \"args\": [\n                            \"--config-dir=/tmp/cilium/config-map\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"K8S_NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            },\n       
                     {\n                                \"name\": \"CILIUM_FLANNEL_MASTER_DEVICE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"flannel-master-device\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_FLANNEL_UNINSTALL_ON_EXIT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"flannel-uninstall-on-exit\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CLUSTERMESH_CONFIG\",\n                                \"value\": \"/var/lib/cilium/clustermesh/\"\n                            },\n                            {\n                                \"name\": \"CILIUM_CNI_CHAINING_MODE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"cni-chaining-mode\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CUSTOM_CNI_CONF\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"custom-cni-conf\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                \"value\": \"443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"25m\",\n                                \"memory\": \"128Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"cni-path\",\n                         
       \"mountPath\": \"/host/opt/cni/bin\"\n                            },\n                            {\n                                \"name\": \"etc-cni-netd\",\n                                \"mountPath\": \"/host/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"clustermesh-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/cilium/clustermesh\"\n                            },\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/tmp/cilium/config-map\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-njv2z\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 120,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 10\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"lifecycle\": {\n                            \"postStart\": {\n                                \"exec\": {\n                           
         \"command\": [\n                                        \"/cni-install.sh\"\n                                    ]\n                                }\n                            },\n                            \"preStop\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-uninstall.sh\"\n                                    ]\n                                }\n                            }\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"SYS_MODULE\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 1,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"cilium\",\n                \"serviceAccount\": \"cilium\",\n                \"nodeName\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-59-92.ap-southeast-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    },\n                    \"podAntiAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"labelSelector\": {\n                                    \"matchExpressions\": [\n                                        {\n                                            \"key\": \"k8s-app\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"cilium\"\n                                            ]\n                                        }\n                                    ]\n                                },\n                                \"topologyKey\": \"kubernetes.io/hostname\"\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n  
                  },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:30Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:44Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:44Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:12Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.59.92\",\n                \"podIP\": \"172.20.59.92\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.59.92\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:10:13Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"state\": {\n                            \"terminated\": {\n             
                   \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-05-29T07:10:29Z\",\n                                \"finishedAt\": \"2021-05-29T07:10:29Z\",\n                                \"containerID\": \"containerd://bac19648aa32013c4015177ce2354905045bd0fbc485275d3326e9192fb5372b\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"imageID\": \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                        \"containerID\": \"containerd://bac19648aa32013c4015177ce2354905045bd0fbc485275d3326e9192fb5372b\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:10:30Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"imageID\": \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                        \"containerID\": \"containerd://e970a395793ba62764f5e073d0ad0c1b042d66be93f2fd0889cc70937e6de198\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-9h72w\",\n                \"generateName\": \"cilium-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d1d8e66a-1101-4467-aec9-332758db0cbb\",\n                \"resourceVersion\": \"822\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"d8b94ddb6\",\n                    \"k8s-app\": \"cilium\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"cilium\",\n                        \"uid\": \"6070be26-bfff-4ff0-8619-33a3ed28ac99\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cilium-run\",\n                        \"hostPath\": {\n                            \"path\": \"/var/run/cilium\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"bpf-maps\",\n       
                 \"hostPath\": {\n                            \"path\": \"/sys/fs/bpf\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni-path\",\n                        \"hostPath\": {\n                            \"path\": \"/opt/cni/bin\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-cni-netd\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"clustermesh-secrets\",\n                        \"secret\": {\n                            \"secretName\": \"cilium-clustermesh\",\n                            \"defaultMode\": 420,\n                            \"optional\": true\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-config-path\",\n                        \"configMap\": {\n                            \"name\": \"cilium-config\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-9v7mn\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n           
                                 }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"command\": [\n                            \"/init-container.sh\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"CILIUM_ALL_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_BPF_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-bpf-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_WAIT_BPF_MOUNT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"wait-bpf-mount\",\n                                        \"optional\": true\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-9v7mn\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        
\"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"command\": [\n                            \"cilium-agent\"\n                        ],\n                        \"args\": [\n                            \"--config-dir=/tmp/cilium/config-map\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"K8S_NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_FLANNEL_MASTER_DEVICE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"flannel-master-device\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_FLANNEL_UNINSTALL_ON_EXIT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"flannel-uninstall-on-exit\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CLUSTERMESH_CONFIG\",\n                                \"value\": \"/var/lib/cilium/clustermesh/\"\n                            },\n                            {\n                                \"name\": \"CILIUM_CNI_CHAINING_MODE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"cni-chaining-mode\",\n                                        \"optional\": true\n         
                           }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CUSTOM_CNI_CONF\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"custom-cni-conf\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                \"value\": \"443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"25m\",\n                                \"memory\": \"128Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"cni-path\",\n                                \"mountPath\": \"/host/opt/cni/bin\"\n                            },\n                            {\n                                \"name\": \"etc-cni-netd\",\n                                \"mountPath\": \"/host/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"clustermesh-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/cilium/clustermesh\"\n                            },\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/tmp/cilium/config-map\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-9v7mn\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n               
                 \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 120,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 10\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"lifecycle\": {\n                            \"postStart\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-install.sh\"\n                                    ]\n                                }\n                            },\n                            \"preStop\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-uninstall.sh\"\n                                    ]\n                                }\n                            }\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"SYS_MODULE\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 1,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"cilium\",\n                \"serviceAccount\": \"cilium\",\n                \"nodeName\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        
\"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    },\n                    \"podAntiAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"labelSelector\": {\n                                    \"matchExpressions\": [\n                                        {\n                                            \"key\": \"k8s-app\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"cilium\"\n                                            ]\n                                        }\n                                    ]\n                                },\n                                \"topologyKey\": \"kubernetes.io/hostname\"\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                
\"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:09:42Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:44Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:44Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:57Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.217\",\n                \"podIP\": \"172.20.36.217\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.36.217\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:09:14Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-05-29T07:09:41Z\",\n                                \"finishedAt\": \"2021-05-29T07:09:41Z\",\n                                \"containerID\": \"containerd://77c44f95aad8aed7cb2f2a982b4c1f739ef380eaef8b91b925dcbb247ade3b6d\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"imageID\": \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                        \"containerID\": \"containerd://77c44f95aad8aed7cb2f2a982b4c1f739ef380eaef8b91b925dcbb247ade3b6d\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:09:42Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"imageID\": \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                        \"containerID\": 
\"containerd://c6171fe18ea0d0c9101a940b65f1256aefad305d2eef6baf5ec3b776d334d87d\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-7cd4557b96-jxr8n\",\n                \"generateName\": \"cilium-operator-7cd4557b96-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4ab7253e-855c-4690-80ef-0ad8bc8f92ea\",\n                \"resourceVersion\": \"516\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\",\n                \"labels\": {\n                    \"io.cilium/app\": \"operator\",\n                    \"name\": \"cilium-operator\",\n                    \"pod-template-hash\": \"7cd4557b96\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"cilium-operator-7cd4557b96\",\n                        \"uid\": \"dff7677a-5be4-43cb-822e-1f7ff1770ff9\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cilium-config-path\",\n                        \"configMap\": {\n                            \"name\": \"cilium-config\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-q2vbt\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                 
   }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"cilium-operator\",\n                        \"image\": \"docker.io/cilium/operator:v1.9.7\",\n                        \"command\": [\n                            \"cilium-operator\"\n                        ],\n                        \"args\": [\n                            \"--config-dir=/tmp/cilium/config-map\",\n                            \"--debug=$(CILIUM_DEBUG)\",\n                            \"--eni-tags=KubernetesCluster=e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"K8S_NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_DEBUG\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"debug\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                \"value\": \"443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"25m\",\n                                \"memory\": \"128Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/tmp/cilium/config-map\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-q2vbt\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                
\"port\": 9234,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 3,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"cilium-operator\",\n                \"serviceAccount\": \"cilium-operator\",\n                \"nodeName\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:09:14Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:09:32Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:09:32Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:57Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.217\",\n                \"podIP\": \"172.20.36.217\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.36.217\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:09:14Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"cilium-operator\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:09:32Z\"\n                            }\n                        
},\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/cilium/operator:v1.9.7\",\n                        \"imageID\": \"docker.io/cilium/operator@sha256:151834edf9bf52729719ae50f3465a4a512f22e6eb5de84de8499ca19ca571b0\",\n                        \"containerID\": \"containerd://0eec797c44ede54dbd845c002b9d018b6ecf4bf1aa02143c7db34d6ffd6f21bb\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-ph5rg\",\n                \"generateName\": \"cilium-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8f0b9dd3-041e-481a-b00f-3b80fc50c208\",\n                \"resourceVersion\": \"870\",\n                \"creationTimestamp\": \"2021-05-29T07:10:13Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"d8b94ddb6\",\n                    \"k8s-app\": \"cilium\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"cilium\",\n                        \"uid\": \"6070be26-bfff-4ff0-8619-33a3ed28ac99\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cilium-run\",\n                        \"hostPath\": {\n                            \"path\": \"/var/run/cilium\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"bpf-maps\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/bpf\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni-path\",\n                        \"hostPath\": {\n                            \"path\": \"/opt/cni/bin\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-cni-netd\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": 
\"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"clustermesh-secrets\",\n                        \"secret\": {\n                            \"secretName\": \"cilium-clustermesh\",\n                            \"defaultMode\": 420,\n                            \"optional\": true\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-config-path\",\n                        \"configMap\": {\n                            \"name\": \"cilium-config\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-xdzs6\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"command\": [\n                            \"/init-container.sh\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"CILIUM_ALL_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": 
\"CILIUM_BPF_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-bpf-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_WAIT_BPF_MOUNT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"wait-bpf-mount\",\n                                        \"optional\": true\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-xdzs6\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"command\": [\n                            \"cilium-agent\"\n                        ],\n                        \"args\": [\n                            \"--config-dir=/tmp/cilium/config-map\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"K8S_NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": 
\"spec.nodeName\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_FLANNEL_MASTER_DEVICE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"flannel-master-device\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_FLANNEL_UNINSTALL_ON_EXIT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"flannel-uninstall-on-exit\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CLUSTERMESH_CONFIG\",\n                                \"value\": \"/var/lib/cilium/clustermesh/\"\n                            },\n                            {\n                                \"name\": \"CILIUM_CNI_CHAINING_MODE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"cni-chaining-mode\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CUSTOM_CNI_CONF\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"custom-cni-conf\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                \"value\": \"443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"25m\",\n                                \"memory\": \"128Mi\"\n      
                      }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"cni-path\",\n                                \"mountPath\": \"/host/opt/cni/bin\"\n                            },\n                            {\n                                \"name\": \"etc-cni-netd\",\n                                \"mountPath\": \"/host/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"clustermesh-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/cilium/clustermesh\"\n                            },\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/tmp/cilium/config-map\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-xdzs6\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 120,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 10\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                      
  \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"lifecycle\": {\n                            \"postStart\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-install.sh\"\n                                    ]\n                                }\n                            },\n                            \"preStop\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-uninstall.sh\"\n                                    ]\n                                }\n                            }\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"SYS_MODULE\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 1,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"cilium\",\n                \"serviceAccount\": \"cilium\",\n                \"nodeName\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-56-44.ap-southeast-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    },\n                    \"podAntiAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"labelSelector\": {\n                                    \"matchExpressions\": [\n                                        {\n                                            \"key\": \"k8s-app\",\n                                            \"operator\": \"In\",\n                                            
\"values\": [\n                                                \"cilium\"\n                                            ]\n                                        }\n                                    ]\n                                },\n                                \"topologyKey\": \"kubernetes.io/hostname\"\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:30Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:54Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:54Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": 
\"2021-05-29T07:10:13Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.56.44\",\n                \"podIP\": \"172.20.56.44\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.56.44\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:10:14Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-05-29T07:10:29Z\",\n                                \"finishedAt\": \"2021-05-29T07:10:29Z\",\n                                \"containerID\": \"containerd://e6421bd3ae51e722e1ce84a3defc818df6fe4fe8ed884bae3058f444b85fb31f\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"imageID\": \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                        \"containerID\": \"containerd://e6421bd3ae51e722e1ce84a3defc818df6fe4fe8ed884bae3058f444b85fb31f\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:10:30Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"imageID\": \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                        \"containerID\": \"containerd://fbe4ec1e6af2e2462378a9fd90fa1fe30066a773a73894bb9e54fcff6f60b658\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wth8r\",\n                \"generateName\": \"cilium-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3f7cdda8-0277-491f-9013-9b6efe22e89d\",\n                \"resourceVersion\": \"898\",\n                \"creationTimestamp\": \"2021-05-29T07:10:13Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"d8b94ddb6\",\n                    \"k8s-app\": \"cilium\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"cilium\",\n                        \"uid\": 
\"6070be26-bfff-4ff0-8619-33a3ed28ac99\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cilium-run\",\n                        \"hostPath\": {\n                            \"path\": \"/var/run/cilium\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"bpf-maps\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/bpf\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni-path\",\n                        \"hostPath\": {\n                            \"path\": \"/opt/cni/bin\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-cni-netd\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"clustermesh-secrets\",\n                        \"secret\": {\n                            \"secretName\": \"cilium-clustermesh\",\n                            \"defaultMode\": 420,\n                            \"optional\": true\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-config-path\",\n                        \"configMap\": {\n                            \"name\": \"cilium-config\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-2spdt\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n           
                         }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"command\": [\n                            \"/init-container.sh\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"CILIUM_ALL_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_BPF_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-bpf-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_WAIT_BPF_MOUNT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"wait-bpf-mount\",\n                                        \"optional\": true\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n        
                    {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-2spdt\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"command\": [\n                            \"cilium-agent\"\n                        ],\n                        \"args\": [\n                            \"--config-dir=/tmp/cilium/config-map\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"K8S_NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_FLANNEL_MASTER_DEVICE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"flannel-master-device\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_FLANNEL_UNINSTALL_ON_EXIT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"flannel-uninstall-on-exit\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n             
                   \"name\": \"CILIUM_CLUSTERMESH_CONFIG\",\n                                \"value\": \"/var/lib/cilium/clustermesh/\"\n                            },\n                            {\n                                \"name\": \"CILIUM_CNI_CHAINING_MODE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"cni-chaining-mode\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CUSTOM_CNI_CONF\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"custom-cni-conf\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                \"value\": \"443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"25m\",\n                                \"memory\": \"128Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"cni-path\",\n                                \"mountPath\": \"/host/opt/cni/bin\"\n                            },\n                            {\n                                \"name\": \"etc-cni-netd\",\n                                \"mountPath\": \"/host/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"clustermesh-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/cilium/clustermesh\"\n                            },\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/tmp/cilium/config-map\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                
\"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-2spdt\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 120,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 10\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"lifecycle\": {\n                            \"postStart\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-install.sh\"\n                                    ]\n                                }\n                            },\n                            \"preStop\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-uninstall.sh\"\n                                    ]\n                                }\n                            }\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"SYS_MODULE\"\n                                ]\n                            },\n                            \"privileged\": 
true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 1,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"cilium\",\n                \"serviceAccount\": \"cilium\",\n                \"nodeName\": \"ip-172-20-54-213.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-54-213.ap-southeast-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    },\n                    \"podAntiAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"labelSelector\": {\n                                    \"matchExpressions\": [\n                                        {\n                                            \"key\": \"k8s-app\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"cilium\"\n                                            ]\n                                        }\n                                    ]\n                                },\n                                \"topologyKey\": \"kubernetes.io/hostname\"\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": 
\"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:30Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:57Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:57Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:13Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.54.213\",\n                \"podIP\": \"172.20.54.213\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.54.213\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:10:14Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-05-29T07:10:29Z\",\n                                \"finishedAt\": \"2021-05-29T07:10:29Z\",\n                                \"containerID\": \"containerd://b0b848e1fbec574a285cabdbc0c7de2ffb83177ff411aa2c8f783d758c94d676\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"imageID\": \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                        \"containerID\": \"containerd://b0b848e1fbec574a285cabdbc0c7de2ffb83177ff411aa2c8f783d758c94d676\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                 
       \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:10:30Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"imageID\": \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                        \"containerID\": \"containerd://a5ee13c6d76b2202b3f7ac9983438afc06b8d556daad5ad0ef8eb7fd38f1cb03\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wzxqf\",\n                \"generateName\": \"cilium-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"006296a0-1cf6-4628-8a60-61214cb31c67\",\n                \"resourceVersion\": \"976\",\n                \"creationTimestamp\": \"2021-05-29T07:10:22Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"d8b94ddb6\",\n                    \"k8s-app\": \"cilium\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"cilium\",\n                        \"uid\": \"6070be26-bfff-4ff0-8619-33a3ed28ac99\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cilium-run\",\n                        \"hostPath\": {\n                            \"path\": \"/var/run/cilium\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"bpf-maps\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/bpf\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni-path\",\n                        \"hostPath\": {\n                            \"path\": \"/opt/cni/bin\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-cni-netd\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                   
     \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"clustermesh-secrets\",\n                        \"secret\": {\n                            \"secretName\": \"cilium-clustermesh\",\n                            \"defaultMode\": 420,\n                            \"optional\": true\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-config-path\",\n                        \"configMap\": {\n                            \"name\": \"cilium-config\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-jjnds\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"command\": [\n                            \"/init-container.sh\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"CILIUM_ALL_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-state\",\n                                        \"optional\": true\n                  
                  }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_BPF_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-bpf-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_WAIT_BPF_MOUNT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"wait-bpf-mount\",\n                                        \"optional\": true\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-jjnds\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"command\": [\n                            \"cilium-agent\"\n                        ],\n                        \"args\": [\n                            \"--config-dir=/tmp/cilium/config-map\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"K8S_NODE_NAME\",\n                                \"valueFrom\": {\n                 
                   \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_FLANNEL_MASTER_DEVICE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"flannel-master-device\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_FLANNEL_UNINSTALL_ON_EXIT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"flannel-uninstall-on-exit\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CLUSTERMESH_CONFIG\",\n                                \"value\": \"/var/lib/cilium/clustermesh/\"\n                            },\n                            {\n                                \"name\": \"CILIUM_CNI_CHAINING_MODE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"cni-chaining-mode\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CUSTOM_CNI_CONF\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"custom-cni-conf\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                \"value\": \"443\"\n                            }\n                        ],\n                        \"resources\": {\n 
                           \"requests\": {\n                                \"cpu\": \"25m\",\n                                \"memory\": \"128Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"cni-path\",\n                                \"mountPath\": \"/host/opt/cni/bin\"\n                            },\n                            {\n                                \"name\": \"etc-cni-netd\",\n                                \"mountPath\": \"/host/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"clustermesh-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/cilium/clustermesh\"\n                            },\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/tmp/cilium/config-map\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-jjnds\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 120,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 10\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                
\"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"lifecycle\": {\n                            \"postStart\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-install.sh\"\n                                    ]\n                                }\n                            },\n                            \"preStop\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-uninstall.sh\"\n                                    ]\n                                }\n                            }\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"SYS_MODULE\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 1,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"cilium\",\n                \"serviceAccount\": \"cilium\",\n                \"nodeName\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-61-32.ap-southeast-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    },\n                    \"podAntiAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"labelSelector\": {\n                                    \"matchExpressions\": [\n                                        {\n                      
                      \"key\": \"k8s-app\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"cilium\"\n                                            ]\n                                        }\n                                    ]\n                                },\n                                \"topologyKey\": \"kubernetes.io/hostname\"\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:40Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:11:18Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:11:18Z\"\n                    },\n                    {\n                        \"type\": 
\"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:22Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.61.32\",\n                \"podIP\": \"172.20.61.32\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.61.32\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:10:23Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-05-29T07:10:39Z\",\n                                \"finishedAt\": \"2021-05-29T07:10:39Z\",\n                                \"containerID\": \"containerd://2d96955e57fb2fde9d4dfca8c60f07887229d4ae30cd7975e82c6fedb209fd28\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"imageID\": \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                        \"containerID\": \"containerd://2d96955e57fb2fde9d4dfca8c60f07887229d4ae30cd7975e82c6fedb209fd28\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:10:40Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/cilium/cilium:v1.9.7\",\n                        \"imageID\": \"docker.io/cilium/cilium@sha256:fe81537bc5df109e85f7f14487750c73fa1d802c72654a9bf392f1700d5ef512\",\n                        \"containerID\": \"containerd://1261c00fb7a53874cafd1c7677293a23d25ce175a475c663b25c7d805b3165e1\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-6f594f4c58-kb772\",\n                \"generateName\": \"coredns-autoscaler-6f594f4c58-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cd75600d-6b2e-4a4c-99f3-a2a0d82594b6\",\n                \"resourceVersion\": \"891\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"6f594f4c58\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                  
      \"name\": \"coredns-autoscaler-6f594f4c58\",\n                        \"uid\": \"dc9609e5-a628-4239-80da-55dba3177857\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-api-access-2qmkh\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"autoscaler\",\n                        \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\",\n                        \"command\": [\n                            \"/cluster-proportional-autoscaler\",\n                            \"--namespace=kube-system\",\n                            \"--configmap=coredns-autoscaler\",\n                            \"--target=Deployment/coredns\",\n                            \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                            \"--logtostderr=true\",\n                            \"--v=2\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"20m\",\n                                \"memory\": \"10Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-api-access-2qmkh\",\n                                \"readOnly\": true,\n                                \"mountPath\": 
\"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"coredns-autoscaler\",\n                \"serviceAccount\": \"coredns-autoscaler\",\n                \"nodeName\": \"ip-172-20-61-32.ap-southeast-1.compute.internal\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:43Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:56Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:56Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:43Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.61.32\",\n                \"podIP\": \"100.96.4.8\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.4.8\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:10:43Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"autoscaler\",\n                        \"state\": {\n                            \"running\": {\n                                
\"startedAt\": \"2021-05-29T07:10:55Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3\",\n                        \"imageID\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:67640771ad9fc56f109d5b01e020f0c858e7c890bb0eb15ba0ebd325df3285e7\",\n                        \"containerID\": \"containerd://5de724c4b4abac974871ab6e481f1da049e6c40d18ef318b0b73649e87cadab5\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-5xwkz\",\n                \"generateName\": \"coredns-f45c4bf76-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"44f2f8cb-ca2b-4a5d-a234-35ee5d2a5ece\",\n                \"resourceVersion\": \"838\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"f45c4bf76\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-f45c4bf76\",\n                        \"uid\": \"e95d5259-ec04-4bc4-944c-d69065836962\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-5sll9\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": 
[\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"coredns/coredns:1.8.3\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-5sll9\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 
8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-59-92.ap-southeast-1.compute.internal\",\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n                                \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    \"kube-dns\"\n                                                ]\n                                            }\n                                        ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n       
         ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:34Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:49Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:49Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:34Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.59.92\",\n                \"podIP\": \"100.96.1.248\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"100.96.1.248\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:10:34Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:10:48Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/coredns/coredns:1.8.3\",\n                        \"imageID\": \"docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                        \"containerID\": \"containerd://63881df74efb10e97016266c4b03d16c8e8fbb127c9b3cf9f19f80d32547ca75\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-f45c4bf76-hwsqm\",\n                \"generateName\": \"coredns-f45c4bf76-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1eb9b022-5c7c-4c63-b30b-259f19043328\",\n                \"resourceVersion\": \"925\",\n                \"creationTimestamp\": \"2021-05-29T07:10:55Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"f45c4bf76\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-f45c4bf76\",\n                        \"uid\": \"e95d5259-ec04-4bc4-944c-d69065836962\",\n    
                    \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-f84j4\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"coredns/coredns:1.8.3\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": 
\"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-f84j4\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"ip-172-20-56-44.ap-southeast-1.compute.internal\",\n                \"securityContext\": 
{},\n                \"affinity\": {\n                    \"podAntiAffinity\": {\n                        \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"weight\": 100,\n                                \"podAffinityTerm\": {\n                                    \"labelSelector\": {\n                                        \"matchExpressions\": [\n                                            {\n                                                \"key\": \"k8s-app\",\n                                                \"operator\": \"In\",\n                                                \"values\": [\n                                                    \"kube-dns\"\n                                                ]\n                                            }\n                                        ]\n                                    },\n                                    \"topologyKey\": \"kubernetes.io/hostname\"\n                                }\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:55Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:11:03Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:11:03Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:10:55Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.56.44\",\n                \"podIP\": \"100.96.3.109\",\n                \"podIPs\": [\n                    {\n                   
     \"ip\": \"100.96.3.109\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:10:55Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:11:02Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/coredns/coredns:1.8.3\",\n                        \"imageID\": \"docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5\",\n                        \"containerID\": \"containerd://d418e56ac3f4b09bf84f24d6223a883fc0773f8cf840edb94f5a90c89a220d36\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-5f98b58844-2t8db\",\n                \"generateName\": \"dns-controller-5f98b58844-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f5b550bc-8ee0-40a0-8bb2-0bf80d4bd2f5\",\n                \"resourceVersion\": \"509\",\n                \"creationTimestamp\": \"2021-05-29T07:08:57Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"5f98b58844\",\n                    \"version\": \"v1.22.0-alpha.1\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"dns-controller-5f98b58844\",\n                        \"uid\": \"648b2167-6e11-4175-9631-1a77bd455ffe\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-api-access-tjmhm\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    
\"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"dns-controller\",\n                        \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.1\",\n                        \"command\": [\n                            \"/dns-controller\",\n                            \"--watch-ingress=false\",\n                            \"--dns=aws-route53\",\n                            \"--zone=*/ZEMLNXIIWQ0RV\",\n                            \"--zone=*/*\",\n                            \"-v=2\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-api-access-tjmhm\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"runAsNonRoot\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"dns-controller\",\n                \"serviceAccount\": \"dns-controller\",\n                \"nodeName\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                
\"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:09:14Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:09:30Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:09:30Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:57Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.217\",\n                \"podIP\": \"172.20.36.217\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.36.217\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:09:14Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"dns-controller\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:09:29Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.1\",\n                        \"imageID\": \"sha256:fd56fb87ef942cac6917dcaabeef4d9611fd3448b491ea6f69ac7c160b8ae2be\",\n                        \"containerID\": \"containerd://8f0428077d83ea92cc71b1b1f785297f247105324f71fda6c2ddc2d32dd5d47e\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"404df692-e194-43b4-83ac-50ddf648f46a\",\n                \"resourceVersion\": \"528\",\n                \"creationTimestamp\": \"2021-05-29T07:09:30Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-events\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"cca7e3b1e6ce0b382773e48432e2c8a9\",\n                    \"kubernetes.io/config.mirror\": \"cca7e3b1e6ce0b382773e48432e2c8a9\",\n                    \"kubernetes.io/config.seen\": \"2021-05-29T07:07:47.684638600Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        
\"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                        \"uid\": \"11e47d04-161d-4157-b541-0dcf395748fd\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-events\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd-events.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"command\": [\n                            \"/bin/sh\",\n                            \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager --backup-store=s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/events --client-urls=https://__name__:4002 --cluster-name=etcd-events --containerized=true --dns-suffix=.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io --etcd-insecure=false --grpc-port=3997 --insecure=false --peer-urls=https://__name__:2381 --quarantine-client-urls=https://__name__:3995 --v=6 --volume-name-tag=k8s.io/etcd/events --volume-provider=aws --volume-tag=k8s.io/etcd/events --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-459b123097-cb70c.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                
\"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:07:48Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:13Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:13Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:07:48Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.217\",\n                \"podIP\": \"172.20.36.217\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.36.217\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:07:48Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:08:13Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": 
\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"imageID\": \"k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                        \"containerID\": \"containerd://284f941ed85fee6904bd4bfc01953cf9268c901f0fd2f38821f3e06e97a1065d\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3ef12197-3eb7-4e31-8387-232c9477c4e0\",\n                \"resourceVersion\": \"545\",\n                \"creationTimestamp\": \"2021-05-29T07:09:46Z\",\n                \"labels\": {\n                    \"k8s-app\": \"etcd-manager-main\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"47513f413ddb883da8fedfe162665fc4\",\n                    \"kubernetes.io/config.mirror\": \"47513f413ddb883da8fedfe162665fc4\",\n                    \"kubernetes.io/config.seen\": \"2021-05-29T07:07:47.684654620Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                        \"uid\": \"11e47d04-161d-4157-b541-0dcf395748fd\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"rootfs\",\n                        \"hostPath\": {\n                            \"path\": \"/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"run\",\n                        \"hostPath\": {\n                            \"path\": \"/run\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd-manager-main\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlogetcd\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/etcd.log\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"command\": [\n                            \"/bin/sh\",\n                            \"-c\",\n                            \"mkfifo /tmp/pipe; (tee -a /var/log/etcd.log \\u003c /tmp/pipe \\u0026 ) ; exec /etcd-manager 
--backup-store=s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/main --client-urls=https://__name__:4001 --cluster-name=etcd --containerized=true --dns-suffix=.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io --etcd-insecure=false --grpc-port=3996 --insecure=false --peer-urls=https://__name__:2380 --quarantine-client-urls=https://__name__:3994 --v=6 --volume-name-tag=k8s.io/etcd/main --volume-provider=aws --volume-tag=k8s.io/etcd/main --volume-tag=k8s.io/role/master=1 --volume-tag=kubernetes.io/cluster/e2e-459b123097-cb70c.test-cncf-aws.k8s.io=owned \\u003e /tmp/pipe 2\\u003e\\u00261\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"200m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"rootfs\",\n                                \"mountPath\": \"/rootfs\"\n                            },\n                            {\n                                \"name\": \"run\",\n                                \"mountPath\": \"/run\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd-manager\"\n                            },\n                            {\n                                \"name\": \"varlogetcd\",\n                                \"mountPath\": \"/var/log/etcd.log\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"hostPID\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:07:48Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n       
                 \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:15Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:15Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:07:48Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.217\",\n                \"podIP\": \"172.20.36.217\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.36.217\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:07:48Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:08:14Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430\",\n                        \"imageID\": \"k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e\",\n                        \"containerID\": \"containerd://c715d57c02f34831084cb60be46029058f1528e6472b7e86ae2af8be414b1324\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-hqjvz\",\n                \"generateName\": \"kops-controller-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"76a323e2-8007-4d93-b828-6526380ab2c1\",\n                \"resourceVersion\": \"560\",\n                \"creationTimestamp\": \"2021-05-29T07:09:50Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"c56588f9f\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"pod-template-generation\": \"1\",\n                    \"version\": \"v1.22.0-alpha.1\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kops-controller\",\n                        \"uid\": \"179e8ba4-1f3f-4f62-8628-6194c8dfab00\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": 
\"kops-controller-config\",\n                        \"configMap\": {\n                            \"name\": \"kops-controller\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kops-controller-pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kops-controller/\",\n                            \"type\": \"Directory\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-scgfb\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-alpha.1\",\n                        \"command\": [\n                            \"/kops-controller\",\n                            \"--v=2\",\n                            \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"127.0.0.1\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"50m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n  
                              \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-scgfb\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"runAsNonRoot\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"kops-controller\",\n                \"serviceAccount\": \"kops-controller\",\n                \"nodeName\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-36-217.ap-southeast-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": 
\"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:09:50Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:09:51Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:09:51Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:09:50Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.217\",\n                \"podIP\": \"172.20.36.217\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.36.217\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:09:50Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kops-controller\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:09:50Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-alpha.1\",\n                        \"imageID\": \"sha256:61470ce376029a86ce008eebb2cfb0ca1c41283e97f8edc39549e0f2e7edb273\",\n                        \"containerID\": \"containerd://fd7cfe1afae043dc21fd618f0d4980d9fec5c32807a333cd43f19842d1c4e120\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n      
  {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"feaaba8c-0d5d-41a5-b817-c38bbf22345f\",\n                \"resourceVersion\": \"546\",\n                \"creationTimestamp\": \"2021-05-29T07:09:45Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-apiserver\"\n                },\n                \"annotations\": {\n                    \"dns.alpha.kubernetes.io/external\": \"api.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\",\n                    \"dns.alpha.kubernetes.io/internal\": \"api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\",\n                    \"kubernetes.io/config.hash\": \"f2fe32d4a75cc1f5fa1a3f919a3e3b23\",\n                    \"kubernetes.io/config.mirror\": \"f2fe32d4a75cc1f5fa1a3f919a3e3b23\",\n                    \"kubernetes.io/config.seen\": \"2021-05-29T07:07:47.684656202Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                        \"uid\": \"11e47d04-161d-4157-b541-0dcf395748fd\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-apiserver.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                 
       \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"pki\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/kube-apiserver\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cloudconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvkube\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvsshproxy\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/sshproxy\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"healthcheck-secrets\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/kube-apiserver-healthcheck/secrets\",\n                            \"type\": \"Directory\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-apiserver\"\n                        ],\n                        \"args\": [\n                            \"--allow-privileged=true\",\n                            \"--anonymous-auth=false\",\n                            \"--api-audiences=kubernetes.svc.default\",\n                            \"--apiserver-count=1\",\n                            \"--authorization-mode=Node,RBAC\",\n                            \"--bind-address=0.0.0.0\",\n                            \"--client-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--cloud-config=/etc/kubernetes/cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota\",\n                            \"--etcd-cafile=/etc/kubernetes/pki/kube-apiserver/etcd-ca.crt\",\n                            
\"--etcd-certfile=/etc/kubernetes/pki/kube-apiserver/etcd-client.crt\",\n                            \"--etcd-keyfile=/etc/kubernetes/pki/kube-apiserver/etcd-client.key\",\n                            \"--etcd-servers-overrides=/events#https://127.0.0.1:4002\",\n                            \"--etcd-servers=https://127.0.0.1:4001\",\n                            \"--insecure-port=0\",\n                            \"--kubelet-client-certificate=/srv/kubernetes/kubelet-api.crt\",\n                            \"--kubelet-client-key=/srv/kubernetes/kubelet-api.key\",\n                            \"--kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP\",\n                            \"--proxy-client-cert-file=/srv/kubernetes/apiserver-aggregator.crt\",\n                            \"--proxy-client-key-file=/srv/kubernetes/apiserver-aggregator.key\",\n                            \"--requestheader-allowed-names=aggregator\",\n                            \"--requestheader-client-ca-file=/srv/kubernetes/apiserver-aggregator-ca.crt\",\n                            \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n                            \"--requestheader-group-headers=X-Remote-Group\",\n                            \"--requestheader-username-headers=X-Remote-User\",\n                            \"--secure-port=443\",\n                            \"--service-account-issuer=https://api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\",\n                            \"--service-account-jwks-uri=https://api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io/openid/v1/jwks\",\n                            \"--service-account-key-file=/srv/kubernetes/service-account.key\",\n                            \"--service-account-signing-key-file=/srv/kubernetes/service-account.key\",\n                            \"--service-cluster-ip-range=100.64.0.0/13\",\n                            \"--storage-backend=etcd3\",\n                            \"--tls-cert-file=/srv/kubernetes/server.crt\",\n                            \"--tls-private-key-file=/srv/kubernetes/server.key\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-apiserver.log\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"https\",\n                                \"hostPort\": 443,\n                                \"containerPort\": 443,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"150m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-apiserver.log\"\n                            },\n                            {\n                                \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                
\"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                                \"name\": \"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"pki\",\n                                \"mountPath\": \"/etc/kubernetes/pki/kube-apiserver\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/cloud.config\"\n                            },\n                            {\n                                \"name\": \"srvkube\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes\"\n                            },\n                            {\n                                \"name\": \"srvsshproxy\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/sshproxy\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 45,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": 
\"IfNotPresent\"\n                    },\n                    {\n                        \"name\": \"healthcheck\",\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.1\",\n                        \"command\": [\n                            \"/kube-apiserver-healthcheck\"\n                        ],\n                        \"args\": [\n                            \"--ca-cert=/secrets/ca.crt\",\n                            \"--client-cert=/secrets/client.crt\",\n                            \"--client-key=/secrets/client.key\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"healthcheck-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/secrets\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/.kube-apiserver-healthcheck/healthz\",\n                                \"port\": 3990,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:07:48Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:27Z\"\n                    },\n                    {\n                        \"type\": 
\"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:27Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:07:48Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.217\",\n                \"podIP\": \"172.20.36.217\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.36.217\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:07:48Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"healthcheck\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:08:06Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.1\",\n                        \"imageID\": \"sha256:c7a4d1f680d20eac4a154ac1e2ca6d436f21dded3b95d446727164fc708fedcc\",\n                        \"containerID\": \"containerd://b3bb299eef3106f043f615992d0b994e300cd63987a4eec36922aac8772154bd\",\n                        \"started\": true\n                    },\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:08:26Z\"\n                            }\n                        },\n                        \"lastState\": {\n                            \"terminated\": {\n                                \"exitCode\": 1,\n                                \"reason\": \"Error\",\n                                \"startedAt\": \"2021-05-29T07:08:05Z\",\n                                \"finishedAt\": \"2021-05-29T07:08:26Z\",\n                                \"containerID\": \"containerd://85b9d1031d5940f286f5ca27d0733b1adecc472407a93d42bda845a6188b5be6\"\n                            }\n                        },\n                        \"ready\": true,\n                        \"restartCount\": 1,\n                        \"image\": \"k8s.gcr.io/kube-apiserver-amd64:v1.21.1\",\n                        \"imageID\": \"sha256:771ffcf9ca634e37cbd3202fd86bd7e2df48ecba4067d1992541bfa00e88a9bb\",\n                        \"containerID\": \"containerd://5302923e629027faf5127376fb66df0cd130e18cea5205ff67b28c09d7203f8b\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"89731fec-7518-400d-826a-5486aec96166\",\n                \"resourceVersion\": \"526\",\n                \"creationTimestamp\": \"2021-05-29T07:09:37Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-controller-manager\"\n                
},\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"da789bd2fa0f396ff7b20d0b5b62759d\",\n                    \"kubernetes.io/config.mirror\": \"da789bd2fa0f396ff7b20d0b5b62759d\",\n                    \"kubernetes.io/config.seen\": \"2021-05-29T07:07:47.684657694Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                        \"uid\": \"11e47d04-161d-4157-b541-0dcf395748fd\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-controller-manager.log\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkitls\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/tls\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcpkica-trust\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/pki/ca-trust\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrsharessl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlibssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/lib/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"usrlocalopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/openssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varssl\",\n                        \"hostPath\": {\n                            \"path\": \"/var/ssl\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcopenssl\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/openssl\",\n           
                 \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"cloudconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/cloud.config\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"srvkube\",\n                        \"hostPath\": {\n                            \"path\": \"/srv/kubernetes\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"varlibkcm\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-controller-manager\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"volplugins\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"image\": \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-controller-manager\"\n                        ],\n                        \"args\": [\n                            \"--allocate-node-cidrs=true\",\n                            \"--attach-detach-reconcile-sync-period=1m0s\",\n                            \"--cloud-config=/etc/kubernetes/cloud.config\",\n                            \"--cloud-provider=aws\",\n                            \"--cluster-cidr=100.96.0.0/11\",\n                            \"--cluster-name=e2e-459b123097-cb70c.test-cncf-aws.k8s.io\",\n                            \"--cluster-signing-cert-file=/srv/kubernetes/ca.crt\",\n                            \"--cluster-signing-key-file=/srv/kubernetes/ca.key\",\n                            \"--configure-cloud-routes=false\",\n                            \"--flex-volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\",\n                            \"--kubeconfig=/var/lib/kube-controller-manager/kubeconfig\",\n                            \"--leader-elect=true\",\n                            \"--root-ca-file=/srv/kubernetes/ca.crt\",\n                            \"--service-account-private-key-file=/srv/kubernetes/service-account.key\",\n                            \"--use-service-account-credentials=true\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-controller-manager.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-controller-manager.log\"\n                            },\n                            {\n                 
               \"name\": \"etcssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl\"\n                            },\n                            {\n                                \"name\": \"etcpkitls\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/tls\"\n                            },\n                            {\n                                \"name\": \"etcpkica-trust\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/pki/ca-trust\"\n                            },\n                            {\n                                \"name\": \"usrsharessl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ssl\"\n                            },\n                            {\n                                \"name\": \"usrssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlibssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/lib/ssl\"\n                            },\n                            {\n                                \"name\": \"usrlocalopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/openssl\"\n                            },\n                            {\n                                \"name\": \"varssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/ssl\"\n                            },\n                            {\n                                \"name\": \"etcopenssl\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/openssl\"\n                            },\n                            {\n                                \"name\": \"cloudconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/cloud.config\"\n                            },\n                            {\n                                \"name\": \"srvkube\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/srv/kubernetes\"\n                            },\n                            {\n                                \"name\": \"varlibkcm\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/kube-controller-manager\"\n                            },\n                            {\n                                \"name\": \"volplugins\",\n                                \"mountPath\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec/\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10257,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            
\"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:07:48Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:14Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:14Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:07:48Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.217\",\n                \"podIP\": \"172.20.36.217\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.36.217\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:07:48Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:08:14Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.1\",\n                        \"imageID\": 
\"k8s.gcr.io/kube-controller-manager-amd64@sha256:e0d7e62864b91b05f02e51ce0ecf8c986270eedb2d1512edff0d43b8660a442e\",\n                        \"containerID\": \"containerd://b43b24751b688897d0ac680e1130b1013769e51bf5c66b1522ddc81553d24673\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"19704c58-8286-4261-90fb-0a598ae0990b\",\n                \"resourceVersion\": \"527\",\n                \"creationTimestamp\": \"2021-05-29T07:09:29Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-scheduler\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"c0ae5c27345cf6526df59af467bd2217\",\n                    \"kubernetes.io/config.mirror\": \"c0ae5c27345cf6526df59af467bd2217\",\n                    \"kubernetes.io/config.seen\": \"2021-05-29T07:07:47.684658820Z\",\n                    \"kubernetes.io/config.source\": \"file\",\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                        \"uid\": \"11e47d04-161d-4157-b541-0dcf395748fd\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"varlibkubescheduler\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/kube-scheduler\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"logfile\",\n                        \"hostPath\": {\n                            \"path\": \"/var/log/kube-scheduler.log\",\n                            \"type\": \"\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.1\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-scheduler\"\n                        ],\n                        \"args\": [\n                            \"--config=/var/lib/kube-scheduler/config.yaml\",\n                            \"--leader-elect=true\",\n                            \"--v=2\",\n                            \"--logtostderr=false\",\n                            \"--alsologtostderr\",\n                            \"--log-file=/var/log/kube-scheduler.log\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"varlibkubescheduler\",\n                                \"readOnly\": true,\n                
                \"mountPath\": \"/var/lib/kube-scheduler\"\n                            },\n                            {\n                                \"name\": \"logfile\",\n                                \"mountPath\": \"/var/log/kube-scheduler.log\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10251,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"ip-172-20-36-217.ap-southeast-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:07:48Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:06Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:08:06Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-05-29T07:07:48Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.36.217\",\n                \"podIP\": \"172.20.36.217\",\n                \"podIPs\": [\n                    {\n                    
    \"ip\": \"172.20.36.217\"\n                    }\n                ],\n                \"startTime\": \"2021-05-29T07:07:48Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-05-29T07:08:05Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-scheduler-amd64:v1.21.1\",\n                        \"imageID\": \"sha256:a4183b88f6e65972c4b176b43ca59de31868635a7e43805f4c6e26203de1742f\",\n                        \"containerID\": \"containerd://a72ca335f9b00ff56e0e88e9cfd046d69784ed18f2804fb49d94a41e671a6a4d\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        }\n    ]\n}\n==== START logs for container clean-cilium-state of pod kube-system/cilium-8q8dv ====\n==== END logs for container clean-cilium-state of pod kube-system/cilium-8q8dv ====\n==== START logs for container cilium-agent of pod kube-system/cilium-8q8dv ====\nlevel=info msg=\"Skipped reading configuration file\" reason=\"Config File \\\"ciliumd\\\" Not Found in \\\"[/root]\\\"\" subsys=config\nlevel=info msg=\"Started gops server\" address=\"127.0.0.1:9890\" subsys=daemon\nlevel=info msg=\"  --agent-health-port='9876'\" subsys=daemon\nlevel=info msg=\"  --agent-labels=''\" subsys=daemon\nlevel=info msg=\"  --allow-icmp-frag-needed='true'\" subsys=daemon\nlevel=info msg=\"  --allow-localhost='auto'\" subsys=daemon\nlevel=info msg=\"  --annotate-k8s-node='true'\" subsys=daemon\nlevel=info msg=\"  --api-rate-limit='map[]'\" subsys=daemon\nlevel=info msg=\"  --arping-refresh-period='5m0s'\" subsys=daemon\nlevel=info msg=\"  --auto-create-cilium-node-resource='true'\" subsys=daemon\nlevel=info msg=\"  --auto-direct-node-routes='false'\" subsys=daemon\nlevel=info msg=\"  --blacklist-conflicting-routes='false'\" subsys=daemon\nlevel=info msg=\"  --bpf-compile-debug='false'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-global-any-max='262144'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-global-tcp-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-any='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp='6h0m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp-fin='10s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp-syn='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-service-any='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-service-tcp='6h0m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-fragments-map-max='8192'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-acceleration='disabled'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-algorithm='random'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-maglev-table-size='16381'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-map-max='65536'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-mode='snat'\" subsys=daemon\nlevel=info msg=\"  --bpf-map-dynamic-size-ratio='0'\" subsys=daemon\nlevel=info msg=\"  --bpf-nat-global-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-neigh-global-max='524288'\" subsys=daemon\nlevel=info msg=\"  
--bpf-policy-map-max='16384'\" subsys=daemon\nlevel=info msg=\"  --bpf-root=''\" subsys=daemon\nlevel=info msg=\"  --bpf-sock-rev-map-max='262144'\" subsys=daemon\nlevel=info msg=\"  --certificates-directory='/var/run/cilium/certs'\" subsys=daemon\nlevel=info msg=\"  --cgroup-root=''\" subsys=daemon\nlevel=info msg=\"  --cluster-id='0'\" subsys=daemon\nlevel=info msg=\"  --cluster-name='default'\" subsys=daemon\nlevel=info msg=\"  --clustermesh-config='/var/lib/cilium/clustermesh/'\" subsys=daemon\nlevel=info msg=\"  --cmdref=''\" subsys=daemon\nlevel=info msg=\"  --config=''\" subsys=daemon\nlevel=info msg=\"  --config-dir='/tmp/cilium/config-map'\" subsys=daemon\nlevel=info msg=\"  --conntrack-gc-interval='0s'\" subsys=daemon\nlevel=info msg=\"  --crd-wait-timeout='5m0s'\" subsys=daemon\nlevel=info msg=\"  --datapath-mode='veth'\" subsys=daemon\nlevel=info msg=\"  --debug='false'\" subsys=daemon\nlevel=info msg=\"  --debug-verbose=''\" subsys=daemon\nlevel=info msg=\"  --device=''\" subsys=daemon\nlevel=info msg=\"  --devices=''\" subsys=daemon\nlevel=info msg=\"  --direct-routing-device=''\" subsys=daemon\nlevel=info msg=\"  --disable-cnp-status-updates='false'\" subsys=daemon\nlevel=info msg=\"  --disable-conntrack='false'\" subsys=daemon\nlevel=info msg=\"  --disable-endpoint-crd='false'\" subsys=daemon\nlevel=info msg=\"  --disable-envoy-version-check='false'\" subsys=daemon\nlevel=info msg=\"  --disable-iptables-feeder-rules=''\" subsys=daemon\nlevel=info msg=\"  --dns-max-ips-per-restored-rule='1000'\" subsys=daemon\nlevel=info msg=\"  --egress-masquerade-interfaces=''\" subsys=daemon\nlevel=info msg=\"  --egress-multi-home-ip-rule-compat='false'\" subsys=daemon\nlevel=info msg=\"  --enable-auto-protect-node-port-range='true'\" subsys=daemon\nlevel=info msg=\"  --enable-bandwidth-manager='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-clock-probe='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-masquerade='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-tproxy='false'\" subsys=daemon\nlevel=info msg=\"  --enable-endpoint-health-checking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-endpoint-routes='false'\" subsys=daemon\nlevel=info msg=\"  --enable-external-ips='true'\" subsys=daemon\nlevel=info msg=\"  --enable-health-check-nodeport='true'\" subsys=daemon\nlevel=info msg=\"  --enable-health-checking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-host-firewall='false'\" subsys=daemon\nlevel=info msg=\"  --enable-host-legacy-routing='false'\" subsys=daemon\nlevel=info msg=\"  --enable-host-port='true'\" subsys=daemon\nlevel=info msg=\"  --enable-host-reachable-services='false'\" subsys=daemon\nlevel=info msg=\"  --enable-hubble='false'\" subsys=daemon\nlevel=info msg=\"  --enable-identity-mark='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ip-masq-agent='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipsec='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv4='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv4-fragment-tracking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv6='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv6-ndp='false'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-api-discovery='false'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-endpoint-slice='true'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-event-handover='false'\" subsys=daemon\nlevel=info msg=\"  --enable-l7-proxy='true'\" subsys=daemon\nlevel=info msg=\"  --enable-local-node-route='true'\" subsys=daemon\nlevel=info msg=\"  
--enable-local-redirect-policy='false'\" subsys=daemon\nlevel=info msg=\"  --enable-monitor='true'\" subsys=daemon\nlevel=info msg=\"  --enable-node-port='true'\" subsys=daemon\nlevel=info msg=\"  --enable-policy='default'\" subsys=daemon\nlevel=info msg=\"  --enable-remote-node-identity='true'\" subsys=daemon\nlevel=info msg=\"  --enable-selective-regeneration='true'\" subsys=daemon\nlevel=info msg=\"  --enable-session-affinity='false'\" subsys=daemon\nlevel=info msg=\"  --enable-svc-source-range-check='true'\" subsys=daemon\nlevel=info msg=\"  --enable-tracing='false'\" subsys=daemon\nlevel=info msg=\"  --enable-well-known-identities='true'\" subsys=daemon\nlevel=info msg=\"  --enable-xt-socket-fallback='true'\" subsys=daemon\nlevel=info msg=\"  --encrypt-interface=''\" subsys=daemon\nlevel=info msg=\"  --encrypt-node='false'\" subsys=daemon\nlevel=info msg=\"  --endpoint-interface-name-prefix='lxc+'\" subsys=daemon\nlevel=info msg=\"  --endpoint-queue-size='25'\" subsys=daemon\nlevel=info msg=\"  --endpoint-status=''\" subsys=daemon\nlevel=info msg=\"  --envoy-log=''\" subsys=daemon\nlevel=info msg=\"  --exclude-local-address=''\" subsys=daemon\nlevel=info msg=\"  --fixed-identity-mapping='map[]'\" subsys=daemon\nlevel=info msg=\"  --flannel-master-device=''\" subsys=daemon\nlevel=info msg=\"  --flannel-uninstall-on-exit='false'\" subsys=daemon\nlevel=info msg=\"  --force-local-policy-eval-at-source='true'\" subsys=daemon\nlevel=info msg=\"  --gops-port='9890'\" subsys=daemon\nlevel=info msg=\"  --host-reachable-services-protos='tcp,udp'\" subsys=daemon\nlevel=info msg=\"  --http-403-msg=''\" subsys=daemon\nlevel=info msg=\"  --http-idle-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --http-max-grpc-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --http-normalize-path='true'\" subsys=daemon\nlevel=info msg=\"  --http-request-timeout='3600'\" subsys=daemon\nlevel=info msg=\"  --http-retry-count='3'\" subsys=daemon\nlevel=info msg=\"  --http-retry-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --hubble-disable-tls='false'\" subsys=daemon\nlevel=info msg=\"  --hubble-event-queue-size='0'\" subsys=daemon\nlevel=info msg=\"  --hubble-flow-buffer-size='4095'\" subsys=daemon\nlevel=info msg=\"  --hubble-listen-address=''\" subsys=daemon\nlevel=info msg=\"  --hubble-metrics=''\" subsys=daemon\nlevel=info msg=\"  --hubble-metrics-server=''\" subsys=daemon\nlevel=info msg=\"  --hubble-socket-path='/var/run/cilium/hubble.sock'\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-cert-file=''\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-client-ca-files=''\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-key-file=''\" subsys=daemon\nlevel=info msg=\"  --identity-allocation-mode='crd'\" subsys=daemon\nlevel=info msg=\"  --identity-change-grace-period='5s'\" subsys=daemon\nlevel=info msg=\"  --install-iptables-rules='true'\" subsys=daemon\nlevel=info msg=\"  --ip-allocation-timeout='2m0s'\" subsys=daemon\nlevel=info msg=\"  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'\" subsys=daemon\nlevel=info msg=\"  --ipam='kubernetes'\" subsys=daemon\nlevel=info msg=\"  --ipsec-key-file=''\" subsys=daemon\nlevel=info msg=\"  --iptables-lock-timeout='5s'\" subsys=daemon\nlevel=info msg=\"  --iptables-random-fully='false'\" subsys=daemon\nlevel=info msg=\"  --ipv4-node='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv4-pod-subnets=''\" subsys=daemon\nlevel=info msg=\"  --ipv4-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv4-service-loopback-address='169.254.42.1'\" subsys=daemon\nlevel=info 
msg=\"  --ipv4-service-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-cluster-alloc-cidr='f00d::/64'\" subsys=daemon\nlevel=info msg=\"  --ipv6-mcast-device=''\" subsys=daemon\nlevel=info msg=\"  --ipv6-node='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-pod-subnets=''\" subsys=daemon\nlevel=info msg=\"  --ipv6-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-service-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipvlan-master-device='undefined'\" subsys=daemon\nlevel=info msg=\"  --join-cluster='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-api-server=''\" subsys=daemon\nlevel=info msg=\"  --k8s-force-json-patch='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-heartbeat-timeout='30s'\" subsys=daemon\nlevel=info msg=\"  --k8s-kubeconfig-path=''\" subsys=daemon\nlevel=info msg=\"  --k8s-namespace='kube-system'\" subsys=daemon\nlevel=info msg=\"  --k8s-require-ipv4-pod-cidr='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-require-ipv6-pod-cidr='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-service-cache-size='128'\" subsys=daemon\nlevel=info msg=\"  --k8s-service-proxy-name=''\" subsys=daemon\nlevel=info msg=\"  --k8s-sync-timeout='3m0s'\" subsys=daemon\nlevel=info msg=\"  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'\" subsys=daemon\nlevel=info msg=\"  --k8s-watcher-queue-size='1024'\" subsys=daemon\nlevel=info msg=\"  --keep-config='false'\" subsys=daemon\nlevel=info msg=\"  --kube-proxy-replacement='strict'\" subsys=daemon\nlevel=info msg=\"  --kube-proxy-replacement-healthz-bind-address=''\" subsys=daemon\nlevel=info msg=\"  --kvstore=''\" subsys=daemon\nlevel=info msg=\"  --kvstore-connectivity-timeout='2m0s'\" subsys=daemon\nlevel=info msg=\"  --kvstore-lease-ttl='15m0s'\" subsys=daemon\nlevel=info msg=\"  --kvstore-opt='map[]'\" subsys=daemon\nlevel=info msg=\"  --kvstore-periodic-sync='5m0s'\" subsys=daemon\nlevel=info msg=\"  --label-prefix-file=''\" subsys=daemon\nlevel=info msg=\"  --labels=''\" subsys=daemon\nlevel=info msg=\"  --lib-dir='/var/lib/cilium'\" subsys=daemon\nlevel=info msg=\"  --log-driver=''\" subsys=daemon\nlevel=info msg=\"  --log-opt='map[]'\" subsys=daemon\nlevel=info msg=\"  --log-system-load='false'\" subsys=daemon\nlevel=info msg=\"  --masquerade='true'\" subsys=daemon\nlevel=info msg=\"  --max-controller-interval='0'\" subsys=daemon\nlevel=info msg=\"  --metrics=''\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation='medium'\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation-flags='syn,fin,rst'\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation-interval='5s'\" subsys=daemon\nlevel=info msg=\"  --monitor-queue-size='0'\" subsys=daemon\nlevel=info msg=\"  --mtu='0'\" subsys=daemon\nlevel=info msg=\"  --nat46-range='0:0:0:0:0:FFFF::/96'\" subsys=daemon\nlevel=info msg=\"  --native-routing-cidr=''\" subsys=daemon\nlevel=info msg=\"  --node-port-acceleration='disabled'\" subsys=daemon\nlevel=info msg=\"  --node-port-algorithm='random'\" subsys=daemon\nlevel=info msg=\"  --node-port-bind-protection='true'\" subsys=daemon\nlevel=info msg=\"  --node-port-mode='snat'\" subsys=daemon\nlevel=info msg=\"  --node-port-range='30000,32767'\" subsys=daemon\nlevel=info msg=\"  --policy-audit-mode='false'\" subsys=daemon\nlevel=info msg=\"  --policy-queue-size='100'\" subsys=daemon\nlevel=info msg=\"  --policy-trigger-interval='1s'\" subsys=daemon\nlevel=info msg=\"  --pprof='false'\" 
subsys=daemon\nlevel=info msg=\"  --preallocate-bpf-maps='false'\" subsys=daemon\nlevel=info msg=\"  --prefilter-device='undefined'\" subsys=daemon\nlevel=info msg=\"  --prefilter-mode='native'\" subsys=daemon\nlevel=info msg=\"  --prepend-iptables-chains='true'\" subsys=daemon\nlevel=info msg=\"  --prometheus-serve-addr=''\" subsys=daemon\nlevel=info msg=\"  --proxy-connect-timeout='1'\" subsys=daemon\nlevel=info msg=\"  --proxy-prometheus-port='0'\" subsys=daemon\nlevel=info msg=\"  --read-cni-conf=''\" subsys=daemon\nlevel=info msg=\"  --restore='true'\" subsys=daemon\nlevel=info msg=\"  --sidecar-istio-proxy-image='cilium/istio_proxy'\" subsys=daemon\nlevel=info msg=\"  --single-cluster-route='false'\" subsys=daemon\nlevel=info msg=\"  --skip-crd-creation='false'\" subsys=daemon\nlevel=info msg=\"  --socket-path='/var/run/cilium/cilium.sock'\" subsys=daemon\nlevel=info msg=\"  --sockops-enable='false'\" subsys=daemon\nlevel=info msg=\"  --state-dir='/var/run/cilium'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-dns-reject-response-code='refused'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-enable-dns-compression='true'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-endpoint-max-ip-per-hostname='50'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-idle-connection-grace-period='0s'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-max-deferred-connection-deletes='10000'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-min-ttl='0'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-pre-cache=''\" subsys=daemon\nlevel=info msg=\"  --tofqdns-proxy-port='0'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-proxy-response-max-delay='100ms'\" subsys=daemon\nlevel=info msg=\"  --trace-payloadlen='128'\" subsys=daemon\nlevel=info msg=\"  --tunnel='vxlan'\" subsys=daemon\nlevel=info msg=\"  --version='false'\" subsys=daemon\nlevel=info msg=\"  --write-cni-conf-when-ready=''\" subsys=daemon\nlevel=info msg=\"     _ _ _\" subsys=daemon\nlevel=info msg=\" ___|_| |_|_ _ _____\" subsys=daemon\nlevel=info msg=\"|  _| | | | | |     |\" subsys=daemon\nlevel=info msg=\"|___|_|_|_|___|_|_|_|\" subsys=daemon\nlevel=info msg=\"Cilium 1.9.7 f993696 2021-05-12T18:21:30-07:00 go version go1.15.12 linux/amd64\" subsys=daemon\nlevel=info msg=\"cilium-envoy  version: 82a70d56bf324287ced3129300db609eceb21d10/1.17.3/Distribution/RELEASE/BoringSSL\" subsys=daemon\nlevel=info msg=\"clang (10.0.0) and kernel (4.19.0) versions: OK!\" subsys=linux-datapath\nlevel=info msg=\"linking environment: OK!\" subsys=linux-datapath\nlevel=info msg=\"Detected mounted BPF filesystem at /sys/fs/bpf\" subsys=bpf\nlevel=info msg=\"Parsing base label prefixes from default label list\" subsys=labels-filter\nlevel=info msg=\"Parsing additional label prefixes from user inputs: []\" subsys=labels-filter\nlevel=info msg=\"Final label prefixes to be used for identity evaluation:\" subsys=labels-filter\nlevel=info msg=\" - reserved:.*\" subsys=labels-filter\nlevel=info msg=\" - :io.kubernetes.pod.namespace\" subsys=labels-filter\nlevel=info msg=\" - :io.cilium.k8s.namespace.labels\" subsys=labels-filter\nlevel=info msg=\" - :app.kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:io.kubernetes\" subsys=labels-filter\nlevel=info msg=\" - !:kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:.*beta.kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:k8s.io\" subsys=labels-filter\nlevel=info msg=\" - !:pod-template-generation\" subsys=labels-filter\nlevel=info msg=\" - !:pod-template-hash\" subsys=labels-filter\nlevel=info msg=\" - 
!:controller-revision-hash\" subsys=labels-filter\nlevel=info msg=\" - !:annotation.*\" subsys=labels-filter\nlevel=info msg=\" - !:etcd_node\" subsys=labels-filter\nlevel=info msg=\"Using autogenerated IPv4 allocation range\" subsys=node v4Prefix=10.92.0.0/16\nlevel=info msg=\"Initializing daemon\" subsys=daemon\nlevel=info msg=\"Establishing connection to apiserver\" host=\"https://api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:443\" subsys=k8s\nlevel=info msg=\"Connected to apiserver\" subsys=k8s\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Trying to auto-enable \\\"enable-node-port\\\", \\\"enable-external-ips\\\", \\\"enable-host-reachable-services\\\", \\\"enable-host-port\\\", \\\"enable-session-affinity\\\" features\" subsys=daemon\nlevel=warning msg=\"Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.\" subsys=daemon\nlevel=info msg=\"BPF host routing is only available in native routing mode. Falling back to legacy host routing (enable-host-legacy-routing=true).\" subsys=daemon\nlevel=info msg=\"Inheriting MTU from external network interface\" device=ens5 ipAddr=172.20.59.92 mtu=9001 subsys=mtu\nlevel=info msg=\"Restored services from maps\" failed=0 restored=0 subsys=service\nlevel=info msg=\"Reading old endpoints...\" subsys=daemon\nlevel=info msg=\"Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock\" subsys=envoy-manager\nlevel=info msg=\"No old endpoints found.\" subsys=daemon\nlevel=error msg=\"Command execution failed\" cmd=\"[iptables -t mangle -n -L CILIUM_PRE_mangle]\" error=\"exit status 1\" subsys=iptables\nlevel=warning msg=\"# Warning: iptables-legacy tables present, use iptables-legacy to see them\" subsys=iptables\nlevel=warning msg=\"iptables: No chain/target/match by that name.\" subsys=iptables\nlevel=info msg=\"Waiting until all Cilium CRDs are available\" subsys=k8s\nlevel=info msg=\"All Cilium CRDs have been found and are available\" subsys=k8s\nlevel=info msg=\"Retrieved node information from kubernetes node\" nodeName=ip-172-20-59-92.ap-southeast-1.compute.internal subsys=k8s\nlevel=info msg=\"Received own node information from API server\" ipAddr.ipv4=172.20.59.92 ipAddr.ipv6=\"<nil>\" k8sNodeIP=172.20.59.92 labels=\"map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-59-92.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a]\" nodeName=ip-172-20-59-92.ap-southeast-1.compute.internal subsys=k8s v4Prefix=100.96.1.0/24 v6Prefix=\"<nil>\"\nlevel=info msg=\"k8s mode: Allowing localhost to reach local endpoints\" subsys=daemon\nlevel=info msg=\"Using auto-derived devices for BPF node port\" devices=\"[ens5]\" directRoutingDevice=ens5 
subsys=daemon\nlevel=info msg=\"Enabling k8s event listener\" subsys=k8s-watcher\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Removing stale endpoint interfaces\" subsys=daemon\nlevel=info msg=\"Skipping kvstore configuration\" subsys=daemon\nlevel=info msg=\"Initializing node addressing\" subsys=daemon\nlevel=info msg=\"Initializing kubernetes IPAM\" subsys=ipam v4Prefix=100.96.1.0/24 v6Prefix=\"<nil>\"\nlevel=info msg=\"Restoring endpoints...\" subsys=daemon\nlevel=info msg=\"Endpoints restored\" failed=0 restored=0 subsys=daemon\nlevel=info msg=\"Addressing information:\" subsys=daemon\nlevel=info msg=\"  Cluster-Name: default\" subsys=daemon\nlevel=info msg=\"  Cluster-ID: 0\" subsys=daemon\nlevel=info msg=\"  Local node-name: ip-172-20-59-92.ap-southeast-1.compute.internal\" subsys=daemon\nlevel=info msg=\"  Node-IPv6: <nil>\" subsys=daemon\nlevel=info msg=\"  External-Node IPv4: 172.20.59.92\" subsys=daemon\nlevel=info msg=\"  Internal-Node IPv4: 100.96.1.250\" subsys=daemon\nlevel=info msg=\"  IPv4 allocation prefix: 100.96.1.0/24\" subsys=daemon\nlevel=info msg=\"  Loopback IPv4: 169.254.42.1\" subsys=daemon\nlevel=info msg=\"  Local IPv4 addresses:\" subsys=daemon\nlevel=info msg=\"  - 172.20.59.92\" subsys=daemon\nlevel=info msg=\"Creating or updating CiliumNode resource\" node=ip-172-20-59-92.ap-southeast-1.compute.internal subsys=nodediscovery\nlevel=info msg=\"Waiting until all pre-existing resources related to policy have been received\" subsys=k8s-watcher\nlevel=info msg=\"Adding local node to cluster\" node=\"{ip-172-20-59-92.ap-southeast-1.compute.internal default [{ExternalIP 54.169.50.147} {InternalIP 172.20.59.92} {CiliumInternalIP 100.96.1.250}] 100.96.1.0/24 <nil> 100.96.1.177 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-59-92.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] 6}\" subsys=nodediscovery\nlevel=info msg=\"Successfully created CiliumNode resource\" subsys=nodediscovery\nlevel=info msg=\"Annotating k8s node\" subsys=daemon v4CiliumHostIP.IPv4=100.96.1.250 v4Prefix=100.96.1.0/24 v4healthIP.IPv4=100.96.1.177 v6CiliumHostIP.IPv6=\"<nil>\" v6Prefix=\"<nil>\" v6healthIP.IPv6=\"<nil>\"\nlevel=info msg=\"Initializing identity allocator\" subsys=identity-cache\nlevel=info msg=\"Cluster-ID is not specified, skipping ClusterMesh initialization\" subsys=daemon\nlevel=info msg=\"Setting up BPF datapath\" bpfClockSource=ktime bpfInsnSet=v2 subsys=datapath-loader\nlevel=info msg=\"Setting sysctl\" 
subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0\nlevel=info msg=\"All pre-existing resources related to policy have been received; continuing\" subsys=k8s-watcher\nlevel=info msg=\"Adding new proxy port rules for cilium-dns-egress:32943\" proxy port name=cilium-dns-egress subsys=proxy\nlevel=info msg=\"Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock\" subsys=monitor-agent\nlevel=info msg=\"Validating configured node address ranges\" subsys=daemon\nlevel=info msg=\"Starting connection tracking garbage collector\" subsys=daemon\nlevel=info msg=\"Starting IP identity watcher\" subsys=ipcache\nlevel=info msg=\"Initial scan of connection tracking completed\" subsys=ct-gc\nlevel=info msg=\"Datapath signal listener running\" subsys=signal\nlevel=info msg=\"Regenerating restored endpoints\" numRestored=0 subsys=daemon\nlevel=info msg=\"Creating host endpoint\" subsys=daemon\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=310 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=310 identityLabels=\"k8s:kops.k8s.io/instancegroup=nodes-ap-southeast-1a,k8s:node-role.kubernetes.io/node,k8s:node.kubernetes.io/instance-type=t3.medium,k8s:topology.kubernetes.io/region=ap-southeast-1,k8s:topology.kubernetes.io/zone=ap-southeast-1a,reserved:host\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=310 identity=1 identityLabels=\"k8s:kops.k8s.io/instancegroup=nodes-ap-southeast-1a,k8s:node-role.kubernetes.io/node,k8s:node.kubernetes.io/instance-type=t3.medium,k8s:topology.kubernetes.io/region=ap-southeast-1,k8s:topology.kubernetes.io/zone=ap-southeast-1a,reserved:host\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Launching Cilium health daemon\" subsys=daemon\nlevel=info msg=\"Finished regenerating restored endpoints\" regenerated=0 subsys=daemon total=0\nlevel=info msg=\"Launching Cilium health endpoint\" subsys=daemon\nlevel=info msg=\"Started healthz status API server\" address=\"127.0.0.1:9876\" subsys=daemon\nlevel=info msg=\"Initializing Cilium API\" subsys=daemon\nlevel=info msg=\"Daemon initialization completed\" bootstrapTime=8.655567247s subsys=daemon\nlevel=info msg=\"Serving cilium API at unix:///var/run/cilium/cilium.sock\" subsys=daemon\nlevel=info msg=\"Hubble server is disabled\" subsys=hubble\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=f91e26ef-c04c-11eb-a3dd-066a077e2d80\nlevel=info msg=\"API request released by rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=f91e26ef-c04c-11eb-a3dd-066a077e2d80 waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.1.248 f91de4a6-c04c-11eb-a3dd-066a077e2d80  }\" 
containerID=7ed6d500ccf166160224763e510d106080628f5cd9577e6fd40f102c89593961 datapathConfiguration=\"&{false false false false <nil>}\" interface=lxcf3901d36fa2b k8sPodName=kube-system/coredns-f45c4bf76-5xwkz labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=465 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=465 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/name=core.addons.k8s.io,k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/version=1.4.0,k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/managed-by=kops,k8s:io.cilium.k8s.namespace.labels.k8s-addon=core.addons.k8s.io,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=coredns,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=kube-dns\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/name:core.addons.k8s.io k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/version:1.4.0 k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/managed-by:kops k8s:io.cilium.k8s.namespace.labels.k8s-addon:core.addons.k8s.io k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:kube-system]\" subsys=crd-allocator\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Invalid state transition skipped\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=465 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=544 subsys=endpoint\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/name=core.addons.k8s.io;k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/version=1.4.0;k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/managed-by=kops;k8s:io.cilium.k8s.namespace.labels.k8s-addon=core.addons.k8s.io;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=coredns;k8s:io.kubernetes.pod.namespace=kube-system;k8s:k8s-app=kube-dns;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=465 identity=14688 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/name=core.addons.k8s.io,k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/version=1.4.0,k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/managed-by=kops,k8s:io.cilium.k8s.namespace.labels.k8s-addon=core.addons.k8s.io,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=coredns,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=kube-dns\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=465 identity=14688 ipv4= ipv6= 
k8sPodName=/ subsys=endpoint\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=119 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=119 identityLabels=\"reserved:health\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=119 identity=4 identityLabels=\"reserved:health\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Compiled new BPF template\" BPFCompilationTime=1.705703508s file-path=/var/run/cilium/state/templates/cb0f89bc435faccd12c131f68e54bffc7cc5dc9c/bpf_host.o subsys=datapath-loader\nlevel=info msg=\"Compiled new BPF template\" BPFCompilationTime=1.765701223s file-path=/var/run/cilium/state/templates/1ad2a0783537b075c2f46b11f02fa092f9bcbbc7/bpf_lxc.o subsys=datapath-loader\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=465 identity=14688 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=465 identity=14688 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=2.139026178s subsys=rate totalDuration=2.139081105s uuid=f91e26ef-c04c-11eb-a3dd-066a077e2d80 waitDurationTotal=0s\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=119 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=310 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Serving cilium health API at unix:///var/run/cilium/health.sock\" subsys=health-server\nlevel=warning msg=\"Unable to update ipcache map entry on pod add\" error=\"ipcache entry for podIP 100.96.1.248 owned by kvstore or agent\" hostIP=100.96.1.248 k8sNamespace=kube-system k8sPodName=coredns-f45c4bf76-5xwkz podIP=100.96.1.248 podIPs=\"[{100.96.1.248}]\" subsys=k8s-watcher\nlevel=warning msg=\"Unable to update ipcache map entry on pod add\" error=\"ipcache entry for podIP 100.96.1.248 owned by kvstore or agent\" hostIP=100.96.1.248 k8sNamespace=kube-system k8sPodName=coredns-f45c4bf76-5xwkz podIP=100.96.1.248 podIPs=\"[{100.96.1.248}]\" subsys=k8s-watcher\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=3 rateLimiterSkipped=true subsys=rate uuid=7299e264-c04d-11eb-a3dd-066a077e2d80\nlevel=info msg=\"API request released by rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=3 rateLimiterSkipped=true subsys=rate uuid=7299e264-c04d-11eb-a3dd-066a077e2d80 waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.1.1 72929d15-c04d-11eb-a3dd-066a077e2d80  }\" 
containerID=e76f64cb430e4f00b9921725a4836e2303440a954a232ee87e4ac79f8676ac3b datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc7d6460c791e6 k8sPodName=services-3817/service-headless-qzchb labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1796 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1796 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services,k8s:io.cilium.k8s.namespace.labels.e2e-run=d740bb95-f430-47d7-935a-f5f0b65a850d,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-3817,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=services-3817,k8s:name=service-headless\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.e2e-framework:services k8s:io.cilium.k8s.namespace.labels.e2e-run:d740bb95-f430-47d7-935a-f5f0b65a850d k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:services-3817]\" subsys=crd-allocator\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services;k8s:io.cilium.k8s.namespace.labels.e2e-run=d740bb95-f430-47d7-935a-f5f0b65a850d;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-3817;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=services-3817;k8s:name=service-headless;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1796 identity=36581 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services,k8s:io.cilium.k8s.namespace.labels.e2e-run=d740bb95-f430-47d7-935a-f5f0b65a850d,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-3817,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=services-3817,k8s:name=service-headless\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1796 identity=36581 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=3 rateLimiterSkipped=true subsys=rate uuid=72def104-c04d-11eb-a3dd-066a077e2d80\nlevel=info msg=\"API request released by rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=3 rateLimiterSkipped=true subsys=rate uuid=72def104-c04d-11eb-a3dd-066a077e2d80 waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.1.120 72dbcee5-c04d-11eb-a3dd-066a077e2d80  }\" containerID=fea149f34c3f02be8277fa9202f45f31f05261a98d23c00bdcec820c69593331 datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc0426b282f1bf k8sPodName=job-4000/backofflimit-qvr5s labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=610 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info 
msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=610 identityLabels=\"k8s:controller-uid=c3f9a2da-ab98-4474-a361-f269cf7616d5,k8s:io.cilium.k8s.namespace.labels.e2e-framework=job,k8s:io.cilium.k8s.namespace.labels.e2e-run=d2deb1ef-401b-4128-a5c7-b7213a17bf06,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=job-4000,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=job-4000,k8s:job-name=backofflimit,k8s:job=backofflimit\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.e2e-framework:job k8s:io.cilium.k8s.namespace.labels.e2e-run:d2deb1ef-401b-4128-a5c7-b7213a17bf06 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:job-4000]\" subsys=crd-allocator\nlevel=info msg=\"Allocated new global key\" key=\"k8s:controller-uid=c3f9a2da-ab98-4474-a361-f269cf7616d5;k8s:io.cilium.k8s.namespace.labels.e2e-framework=job;k8s:io.cilium.k8s.namespace.labels.e2e-run=d2deb1ef-401b-4128-a5c7-b7213a17bf06;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=job-4000;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=job-4000;k8s:job=backofflimit;k8s:job-name=backofflimit;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=610 identity=37824 identityLabels=\"k8s:controller-uid=c3f9a2da-ab98-4474-a361-f269cf7616d5,k8s:io.cilium.k8s.namespace.labels.e2e-framework=job,k8s:io.cilium.k8s.namespace.labels.e2e-run=d2deb1ef-401b-4128-a5c7-b7213a17bf06,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=job-4000,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=job-4000,k8s:job-name=backofflimit,k8s:job=backofflimit\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=610 identity=37824 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1796 identity=36581 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=1796 identity=36581 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=577.833375ms subsys=rate totalDuration=578.03428ms uuid=7299e264-c04d-11eb-a3dd-066a077e2d80 waitDurationTotal=0s\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=610 identity=37824 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=610 identity=37824 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=434.597015ms subsys=rate 
totalDuration=434.86359ms uuid=72def104-c04d-11eb-a3dd-066a077e2d80 waitDurationTotal=0s\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\n==== END logs for container cilium-agent of pod kube-system/cilium-8q8dv ====\n==== START logs for container clean-cilium-state of pod kube-system/cilium-9h72w ====\n==== END logs for container clean-cilium-state of pod kube-system/cilium-9h72w ====\n==== START logs for container cilium-agent of pod kube-system/cilium-9h72w ====\nlevel=info msg=\"Skipped reading configuration file\" reason=\"Config File \\\"ciliumd\\\" Not Found in \\\"[/root]\\\"\" subsys=config\nlevel=info msg=\"Started gops server\" address=\"127.0.0.1:9890\" subsys=daemon\nlevel=info msg=\"  --agent-health-port='9876'\" subsys=daemon\nlevel=info msg=\"  --agent-labels=''\" subsys=daemon\nlevel=info msg=\"  --allow-icmp-frag-needed='true'\" subsys=daemon\nlevel=info msg=\"  --allow-localhost='auto'\" subsys=daemon\nlevel=info msg=\"  --annotate-k8s-node='true'\" subsys=daemon\nlevel=info msg=\"  --api-rate-limit='map[]'\" subsys=daemon\nlevel=info msg=\"  --arping-refresh-period='5m0s'\" subsys=daemon\nlevel=info msg=\"  --auto-create-cilium-node-resource='true'\" subsys=daemon\nlevel=info msg=\"  --auto-direct-node-routes='false'\" subsys=daemon\nlevel=info msg=\"  --blacklist-conflicting-routes='false'\" subsys=daemon\nlevel=info msg=\"  --bpf-compile-debug='false'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-global-any-max='262144'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-global-tcp-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-any='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp='6h0m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp-fin='10s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp-syn='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-service-any='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-service-tcp='6h0m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-fragments-map-max='8192'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-acceleration='disabled'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-algorithm='random'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-maglev-table-size='16381'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-map-max='65536'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-mode='snat'\" subsys=daemon\nlevel=info msg=\"  --bpf-map-dynamic-size-ratio='0'\" subsys=daemon\nlevel=info msg=\"  --bpf-nat-global-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-neigh-global-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-policy-map-max='16384'\" subsys=daemon\nlevel=info msg=\"  --bpf-root=''\" subsys=daemon\nlevel=info msg=\"  --bpf-sock-rev-map-max='262144'\" subsys=daemon\nlevel=info msg=\"  --certificates-directory='/var/run/cilium/certs'\" subsys=daemon\nlevel=info msg=\"  --cgroup-root=''\" subsys=daemon\nlevel=info msg=\"  --cluster-id='0'\" subsys=daemon\nlevel=info msg=\"  --cluster-name='default'\" subsys=daemon\nlevel=info msg=\"  --clustermesh-config='/var/lib/cilium/clustermesh/'\" subsys=daemon\nlevel=info msg=\"  --cmdref=''\" subsys=daemon\nlevel=info msg=\"  --config=''\" subsys=daemon\nlevel=info msg=\"  --config-dir='/tmp/cilium/config-map'\" 
subsys=daemon\nlevel=info msg=\"  --conntrack-gc-interval='0s'\" subsys=daemon\nlevel=info msg=\"  --crd-wait-timeout='5m0s'\" subsys=daemon\nlevel=info msg=\"  --datapath-mode='veth'\" subsys=daemon\nlevel=info msg=\"  --debug='false'\" subsys=daemon\nlevel=info msg=\"  --debug-verbose=''\" subsys=daemon\nlevel=info msg=\"  --device=''\" subsys=daemon\nlevel=info msg=\"  --devices=''\" subsys=daemon\nlevel=info msg=\"  --direct-routing-device=''\" subsys=daemon\nlevel=info msg=\"  --disable-cnp-status-updates='false'\" subsys=daemon\nlevel=info msg=\"  --disable-conntrack='false'\" subsys=daemon\nlevel=info msg=\"  --disable-endpoint-crd='false'\" subsys=daemon\nlevel=info msg=\"  --disable-envoy-version-check='false'\" subsys=daemon\nlevel=info msg=\"  --disable-iptables-feeder-rules=''\" subsys=daemon\nlevel=info msg=\"  --dns-max-ips-per-restored-rule='1000'\" subsys=daemon\nlevel=info msg=\"  --egress-masquerade-interfaces=''\" subsys=daemon\nlevel=info msg=\"  --egress-multi-home-ip-rule-compat='false'\" subsys=daemon\nlevel=info msg=\"  --enable-auto-protect-node-port-range='true'\" subsys=daemon\nlevel=info msg=\"  --enable-bandwidth-manager='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-clock-probe='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-masquerade='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-tproxy='false'\" subsys=daemon\nlevel=info msg=\"  --enable-endpoint-health-checking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-endpoint-routes='false'\" subsys=daemon\nlevel=info msg=\"  --enable-external-ips='true'\" subsys=daemon\nlevel=info msg=\"  --enable-health-check-nodeport='true'\" subsys=daemon\nlevel=info msg=\"  --enable-health-checking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-host-firewall='false'\" subsys=daemon\nlevel=info msg=\"  --enable-host-legacy-routing='false'\" subsys=daemon\nlevel=info msg=\"  --enable-host-port='true'\" subsys=daemon\nlevel=info msg=\"  --enable-host-reachable-services='false'\" subsys=daemon\nlevel=info msg=\"  --enable-hubble='false'\" subsys=daemon\nlevel=info msg=\"  --enable-identity-mark='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ip-masq-agent='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipsec='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv4='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv4-fragment-tracking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv6='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv6-ndp='false'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-api-discovery='false'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-endpoint-slice='true'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-event-handover='false'\" subsys=daemon\nlevel=info msg=\"  --enable-l7-proxy='true'\" subsys=daemon\nlevel=info msg=\"  --enable-local-node-route='true'\" subsys=daemon\nlevel=info msg=\"  --enable-local-redirect-policy='false'\" subsys=daemon\nlevel=info msg=\"  --enable-monitor='true'\" subsys=daemon\nlevel=info msg=\"  --enable-node-port='true'\" subsys=daemon\nlevel=info msg=\"  --enable-policy='default'\" subsys=daemon\nlevel=info msg=\"  --enable-remote-node-identity='true'\" subsys=daemon\nlevel=info msg=\"  --enable-selective-regeneration='true'\" subsys=daemon\nlevel=info msg=\"  --enable-session-affinity='false'\" subsys=daemon\nlevel=info msg=\"  --enable-svc-source-range-check='true'\" subsys=daemon\nlevel=info msg=\"  --enable-tracing='false'\" subsys=daemon\nlevel=info msg=\"  --enable-well-known-identities='true'\" 
subsys=daemon\nlevel=info msg=\"  --enable-xt-socket-fallback='true'\" subsys=daemon\nlevel=info msg=\"  --encrypt-interface=''\" subsys=daemon\nlevel=info msg=\"  --encrypt-node='false'\" subsys=daemon\nlevel=info msg=\"  --endpoint-interface-name-prefix='lxc+'\" subsys=daemon\nlevel=info msg=\"  --endpoint-queue-size='25'\" subsys=daemon\nlevel=info msg=\"  --endpoint-status=''\" subsys=daemon\nlevel=info msg=\"  --envoy-log=''\" subsys=daemon\nlevel=info msg=\"  --exclude-local-address=''\" subsys=daemon\nlevel=info msg=\"  --fixed-identity-mapping='map[]'\" subsys=daemon\nlevel=info msg=\"  --flannel-master-device=''\" subsys=daemon\nlevel=info msg=\"  --flannel-uninstall-on-exit='false'\" subsys=daemon\nlevel=info msg=\"  --force-local-policy-eval-at-source='true'\" subsys=daemon\nlevel=info msg=\"  --gops-port='9890'\" subsys=daemon\nlevel=info msg=\"  --host-reachable-services-protos='tcp,udp'\" subsys=daemon\nlevel=info msg=\"  --http-403-msg=''\" subsys=daemon\nlevel=info msg=\"  --http-idle-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --http-max-grpc-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --http-normalize-path='true'\" subsys=daemon\nlevel=info msg=\"  --http-request-timeout='3600'\" subsys=daemon\nlevel=info msg=\"  --http-retry-count='3'\" subsys=daemon\nlevel=info msg=\"  --http-retry-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --hubble-disable-tls='false'\" subsys=daemon\nlevel=info msg=\"  --hubble-event-queue-size='0'\" subsys=daemon\nlevel=info msg=\"  --hubble-flow-buffer-size='4095'\" subsys=daemon\nlevel=info msg=\"  --hubble-listen-address=''\" subsys=daemon\nlevel=info msg=\"  --hubble-metrics=''\" subsys=daemon\nlevel=info msg=\"  --hubble-metrics-server=''\" subsys=daemon\nlevel=info msg=\"  --hubble-socket-path='/var/run/cilium/hubble.sock'\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-cert-file=''\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-client-ca-files=''\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-key-file=''\" subsys=daemon\nlevel=info msg=\"  --identity-allocation-mode='crd'\" subsys=daemon\nlevel=info msg=\"  --identity-change-grace-period='5s'\" subsys=daemon\nlevel=info msg=\"  --install-iptables-rules='true'\" subsys=daemon\nlevel=info msg=\"  --ip-allocation-timeout='2m0s'\" subsys=daemon\nlevel=info msg=\"  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'\" subsys=daemon\nlevel=info msg=\"  --ipam='kubernetes'\" subsys=daemon\nlevel=info msg=\"  --ipsec-key-file=''\" subsys=daemon\nlevel=info msg=\"  --iptables-lock-timeout='5s'\" subsys=daemon\nlevel=info msg=\"  --iptables-random-fully='false'\" subsys=daemon\nlevel=info msg=\"  --ipv4-node='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv4-pod-subnets=''\" subsys=daemon\nlevel=info msg=\"  --ipv4-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv4-service-loopback-address='169.254.42.1'\" subsys=daemon\nlevel=info msg=\"  --ipv4-service-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-cluster-alloc-cidr='f00d::/64'\" subsys=daemon\nlevel=info msg=\"  --ipv6-mcast-device=''\" subsys=daemon\nlevel=info msg=\"  --ipv6-node='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-pod-subnets=''\" subsys=daemon\nlevel=info msg=\"  --ipv6-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-service-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipvlan-master-device='undefined'\" subsys=daemon\nlevel=info msg=\"  --join-cluster='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-api-server=''\" subsys=daemon\nlevel=info msg=\"  --k8s-force-json-patch='false'\" 
subsys=daemon\nlevel=info msg=\"  --k8s-heartbeat-timeout='30s'\" subsys=daemon\nlevel=info msg=\"  --k8s-kubeconfig-path=''\" subsys=daemon\nlevel=info msg=\"  --k8s-namespace='kube-system'\" subsys=daemon\nlevel=info msg=\"  --k8s-require-ipv4-pod-cidr='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-require-ipv6-pod-cidr='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-service-cache-size='128'\" subsys=daemon\nlevel=info msg=\"  --k8s-service-proxy-name=''\" subsys=daemon\nlevel=info msg=\"  --k8s-sync-timeout='3m0s'\" subsys=daemon\nlevel=info msg=\"  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'\" subsys=daemon\nlevel=info msg=\"  --k8s-watcher-queue-size='1024'\" subsys=daemon\nlevel=info msg=\"  --keep-config='false'\" subsys=daemon\nlevel=info msg=\"  --kube-proxy-replacement='strict'\" subsys=daemon\nlevel=info msg=\"  --kube-proxy-replacement-healthz-bind-address=''\" subsys=daemon\nlevel=info msg=\"  --kvstore=''\" subsys=daemon\nlevel=info msg=\"  --kvstore-connectivity-timeout='2m0s'\" subsys=daemon\nlevel=info msg=\"  --kvstore-lease-ttl='15m0s'\" subsys=daemon\nlevel=info msg=\"  --kvstore-opt='map[]'\" subsys=daemon\nlevel=info msg=\"  --kvstore-periodic-sync='5m0s'\" subsys=daemon\nlevel=info msg=\"  --label-prefix-file=''\" subsys=daemon\nlevel=info msg=\"  --labels=''\" subsys=daemon\nlevel=info msg=\"  --lib-dir='/var/lib/cilium'\" subsys=daemon\nlevel=info msg=\"  --log-driver=''\" subsys=daemon\nlevel=info msg=\"  --log-opt='map[]'\" subsys=daemon\nlevel=info msg=\"  --log-system-load='false'\" subsys=daemon\nlevel=info msg=\"  --masquerade='true'\" subsys=daemon\nlevel=info msg=\"  --max-controller-interval='0'\" subsys=daemon\nlevel=info msg=\"  --metrics=''\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation='medium'\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation-flags='syn,fin,rst'\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation-interval='5s'\" subsys=daemon\nlevel=info msg=\"  --monitor-queue-size='0'\" subsys=daemon\nlevel=info msg=\"  --mtu='0'\" subsys=daemon\nlevel=info msg=\"  --nat46-range='0:0:0:0:0:FFFF::/96'\" subsys=daemon\nlevel=info msg=\"  --native-routing-cidr=''\" subsys=daemon\nlevel=info msg=\"  --node-port-acceleration='disabled'\" subsys=daemon\nlevel=info msg=\"  --node-port-algorithm='random'\" subsys=daemon\nlevel=info msg=\"  --node-port-bind-protection='true'\" subsys=daemon\nlevel=info msg=\"  --node-port-mode='snat'\" subsys=daemon\nlevel=info msg=\"  --node-port-range='30000,32767'\" subsys=daemon\nlevel=info msg=\"  --policy-audit-mode='false'\" subsys=daemon\nlevel=info msg=\"  --policy-queue-size='100'\" subsys=daemon\nlevel=info msg=\"  --policy-trigger-interval='1s'\" subsys=daemon\nlevel=info msg=\"  --pprof='false'\" subsys=daemon\nlevel=info msg=\"  --preallocate-bpf-maps='false'\" subsys=daemon\nlevel=info msg=\"  --prefilter-device='undefined'\" subsys=daemon\nlevel=info msg=\"  --prefilter-mode='native'\" subsys=daemon\nlevel=info msg=\"  --prepend-iptables-chains='true'\" subsys=daemon\nlevel=info msg=\"  --prometheus-serve-addr=''\" subsys=daemon\nlevel=info msg=\"  --proxy-connect-timeout='1'\" subsys=daemon\nlevel=info msg=\"  --proxy-prometheus-port='0'\" subsys=daemon\nlevel=info msg=\"  --read-cni-conf=''\" subsys=daemon\nlevel=info msg=\"  --restore='true'\" subsys=daemon\nlevel=info msg=\"  --sidecar-istio-proxy-image='cilium/istio_proxy'\" subsys=daemon\nlevel=info 
msg=\"  --single-cluster-route='false'\" subsys=daemon\nlevel=info msg=\"  --skip-crd-creation='false'\" subsys=daemon\nlevel=info msg=\"  --socket-path='/var/run/cilium/cilium.sock'\" subsys=daemon\nlevel=info msg=\"  --sockops-enable='false'\" subsys=daemon\nlevel=info msg=\"  --state-dir='/var/run/cilium'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-dns-reject-response-code='refused'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-enable-dns-compression='true'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-endpoint-max-ip-per-hostname='50'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-idle-connection-grace-period='0s'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-max-deferred-connection-deletes='10000'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-min-ttl='0'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-pre-cache=''\" subsys=daemon\nlevel=info msg=\"  --tofqdns-proxy-port='0'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-proxy-response-max-delay='100ms'\" subsys=daemon\nlevel=info msg=\"  --trace-payloadlen='128'\" subsys=daemon\nlevel=info msg=\"  --tunnel='vxlan'\" subsys=daemon\nlevel=info msg=\"  --version='false'\" subsys=daemon\nlevel=info msg=\"  --write-cni-conf-when-ready=''\" subsys=daemon\nlevel=info msg=\"     _ _ _\" subsys=daemon\nlevel=info msg=\" ___|_| |_|_ _ _____\" subsys=daemon\nlevel=info msg=\"|  _| | | | | |     |\" subsys=daemon\nlevel=info msg=\"|___|_|_|_|___|_|_|_|\" subsys=daemon\nlevel=info msg=\"Cilium 1.9.7 f993696 2021-05-12T18:21:30-07:00 go version go1.15.12 linux/amd64\" subsys=daemon\nlevel=info msg=\"cilium-envoy  version: 82a70d56bf324287ced3129300db609eceb21d10/1.17.3/Distribution/RELEASE/BoringSSL\" subsys=daemon\nlevel=info msg=\"clang (10.0.0) and kernel (4.19.0) versions: OK!\" subsys=linux-datapath\nlevel=info msg=\"linking environment: OK!\" subsys=linux-datapath\nlevel=info msg=\"Detected mounted BPF filesystem at /sys/fs/bpf\" subsys=bpf\nlevel=info msg=\"Parsing base label prefixes from default label list\" subsys=labels-filter\nlevel=info msg=\"Parsing additional label prefixes from user inputs: []\" subsys=labels-filter\nlevel=info msg=\"Final label prefixes to be used for identity evaluation:\" subsys=labels-filter\nlevel=info msg=\" - reserved:.*\" subsys=labels-filter\nlevel=info msg=\" - :io.kubernetes.pod.namespace\" subsys=labels-filter\nlevel=info msg=\" - :io.cilium.k8s.namespace.labels\" subsys=labels-filter\nlevel=info msg=\" - :app.kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:io.kubernetes\" subsys=labels-filter\nlevel=info msg=\" - !:kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:.*beta.kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:k8s.io\" subsys=labels-filter\nlevel=info msg=\" - !:pod-template-generation\" subsys=labels-filter\nlevel=info msg=\" - !:pod-template-hash\" subsys=labels-filter\nlevel=info msg=\" - !:controller-revision-hash\" subsys=labels-filter\nlevel=info msg=\" - !:annotation.*\" subsys=labels-filter\nlevel=info msg=\" - !:etcd_node\" subsys=labels-filter\nlevel=info msg=\"Using autogenerated IPv4 allocation range\" subsys=node v4Prefix=10.217.0.0/16\nlevel=info msg=\"Initializing daemon\" subsys=daemon\nlevel=info msg=\"Establishing connection to apiserver\" host=\"https://api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:443\" subsys=k8s\nlevel=info msg=\"Establishing connection to apiserver\" host=\"https://api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:443\" subsys=k8s\nlevel=info msg=\"Connected to apiserver\" subsys=k8s\nlevel=warning 
msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Trying to auto-enable \\\"enable-node-port\\\", \\\"enable-external-ips\\\", \\\"enable-host-reachable-services\\\", \\\"enable-host-port\\\", \\\"enable-session-affinity\\\" features\" subsys=daemon\nlevel=warning msg=\"Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.\" subsys=daemon\nlevel=info msg=\"BPF host routing is only available in native routing mode. Falling back to legacy host routing (enable-host-legacy-routing=true).\" subsys=daemon\nlevel=info msg=\"Inheriting MTU from external network interface\" device=ens5 ipAddr=172.20.36.217 mtu=9001 subsys=mtu\nlevel=info msg=\"Restored services from maps\" failed=0 restored=0 subsys=service\nlevel=info msg=\"Reading old endpoints...\" subsys=daemon\nlevel=info msg=\"No old endpoints found.\" subsys=daemon\nlevel=info msg=\"Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock\" subsys=envoy-manager\nlevel=error msg=\"Command execution failed\" cmd=\"[iptables -t mangle -n -L CILIUM_PRE_mangle]\" error=\"exit status 1\" subsys=iptables\nlevel=warning msg=\"# Warning: iptables-legacy tables present, use iptables-legacy to see them\" subsys=iptables\nlevel=warning msg=\"iptables: No chain/target/match by that name.\" subsys=iptables\nlevel=info msg=\"Waiting until all Cilium CRDs are available\" subsys=k8s\nlevel=info msg=\"All Cilium CRDs have been found and are available\" subsys=k8s\nlevel=info msg=\"Retrieved node information from kubernetes node\" nodeName=ip-172-20-36-217.ap-southeast-1.compute.internal subsys=k8s\nlevel=info msg=\"Received own node information from API server\" ipAddr.ipv4=172.20.36.217 ipAddr.ipv6=\"<nil>\" k8sNodeIP=172.20.36.217 labels=\"map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:master-ap-southeast-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-217.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a]\" nodeName=ip-172-20-36-217.ap-southeast-1.compute.internal subsys=k8s v4Prefix=100.96.0.0/24 v6Prefix=\"<nil>\"\nlevel=info msg=\"k8s mode: Allowing localhost to reach local endpoints\" subsys=daemon\nlevel=info msg=\"Using auto-derived devices for BPF node port\" devices=\"[ens5]\" directRoutingDevice=ens5 subsys=daemon\nlevel=info msg=\"Enabling k8s event listener\" subsys=k8s-watcher\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" 
subsys=klog\nlevel=info msg=\"Removing stale endpoint interfaces\" subsys=daemon\nlevel=info msg=\"Skipping kvstore configuration\" subsys=daemon\nlevel=info msg=\"Initializing node addressing\" subsys=daemon\nlevel=info msg=\"Initializing kubernetes IPAM\" subsys=ipam v4Prefix=100.96.0.0/24 v6Prefix=\"<nil>\"\nlevel=info msg=\"Restoring endpoints...\" subsys=daemon\nlevel=info msg=\"Endpoints restored\" failed=0 restored=0 subsys=daemon\nlevel=info msg=\"Addressing information:\" subsys=daemon\nlevel=info msg=\"  Cluster-Name: default\" subsys=daemon\nlevel=info msg=\"  Cluster-ID: 0\" subsys=daemon\nlevel=info msg=\"  Local node-name: ip-172-20-36-217.ap-southeast-1.compute.internal\" subsys=daemon\nlevel=info msg=\"  Node-IPv6: <nil>\" subsys=daemon\nlevel=info msg=\"  External-Node IPv4: 172.20.36.217\" subsys=daemon\nlevel=info msg=\"  Internal-Node IPv4: 100.96.0.128\" subsys=daemon\nlevel=info msg=\"  IPv4 allocation prefix: 100.96.0.0/24\" subsys=daemon\nlevel=info msg=\"  Loopback IPv4: 169.254.42.1\" subsys=daemon\nlevel=info msg=\"  Local IPv4 addresses:\" subsys=daemon\nlevel=info msg=\"  - 172.20.36.217\" subsys=daemon\nlevel=info msg=\"Creating or updating CiliumNode resource\" node=ip-172-20-36-217.ap-southeast-1.compute.internal subsys=nodediscovery\nlevel=info msg=\"Waiting until all pre-existing resources related to policy have been received\" subsys=k8s-watcher\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Adding local node to cluster\" node=\"{ip-172-20-36-217.ap-southeast-1.compute.internal default [{ExternalIP 13.212.113.26} {InternalIP 172.20.36.217} {CiliumInternalIP 100.96.0.128}] 100.96.0.0/24 <nil> 100.96.0.127 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:master-ap-southeast-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-217.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] 6}\" subsys=nodediscovery\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Successfully created CiliumNode resource\" subsys=nodediscovery\nlevel=info msg=\"Annotating k8s node\" subsys=daemon v4CiliumHostIP.IPv4=100.96.0.128 v4Prefix=100.96.0.0/24 v4healthIP.IPv4=100.96.0.127 v6CiliumHostIP.IPv6=\"<nil>\" v6Prefix=\"<nil>\" v6healthIP.IPv6=\"<nil>\"\nlevel=info msg=\"Initializing identity allocator\" subsys=identity-cache\nlevel=info msg=\"Cluster-ID is not specified, skipping ClusterMesh initialization\" subsys=daemon\nlevel=info msg=\"Setting up BPF datapath\" bpfClockSource=ktime bpfInsnSet=v2 subsys=datapath-loader\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader 
sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0\nlevel=info msg=\"All pre-existing resources related to policy have been received; continuing\" subsys=k8s-watcher\nlevel=info msg=\"Adding new proxy port rules for cilium-dns-egress:46259\" proxy port name=cilium-dns-egress subsys=proxy\nlevel=info msg=\"Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock\" subsys=monitor-agent\nlevel=info msg=\"Validating configured node address ranges\" subsys=daemon\nlevel=info msg=\"Starting connection tracking garbage collector\" subsys=daemon\nlevel=info msg=\"Starting IP identity watcher\" subsys=ipcache\nlevel=info msg=\"Initial scan of connection tracking completed\" subsys=ct-gc\nlevel=info msg=\"Regenerating restored endpoints\" numRestored=0 subsys=daemon\nlevel=info msg=\"Datapath signal listener running\" subsys=signal\nlevel=info msg=\"Creating host endpoint\" subsys=daemon\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=178 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=178 identityLabels=\"k8s:kops.k8s.io/instancegroup=master-ap-southeast-1a,k8s:kops.k8s.io/kops-controller-pki,k8s:node-role.kubernetes.io/control-plane,k8s:node-role.kubernetes.io/master,k8s:node.kubernetes.io/exclude-from-external-load-balancers,k8s:node.kubernetes.io/instance-type=c5.large,k8s:topology.kubernetes.io/region=ap-southeast-1,k8s:topology.kubernetes.io/zone=ap-southeast-1a,reserved:host\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=178 identity=1 identityLabels=\"k8s:kops.k8s.io/instancegroup=master-ap-southeast-1a,k8s:kops.k8s.io/kops-controller-pki,k8s:node-role.kubernetes.io/control-plane,k8s:node-role.kubernetes.io/master,k8s:node.kubernetes.io/exclude-from-external-load-balancers,k8s:node.kubernetes.io/instance-type=c5.large,k8s:topology.kubernetes.io/region=ap-southeast-1,k8s:topology.kubernetes.io/zone=ap-southeast-1a,reserved:host\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Launching Cilium health daemon\" subsys=daemon\nlevel=info msg=\"Finished regenerating restored endpoints\" regenerated=0 subsys=daemon total=0\nlevel=info msg=\"Launching Cilium health endpoint\" subsys=daemon\nlevel=info msg=\"Started healthz status API server\" address=\"127.0.0.1:9876\" subsys=daemon\nlevel=info msg=\"Initializing Cilium API\" subsys=daemon\nlevel=info msg=\"Daemon initialization completed\" bootstrapTime=43.064415571s subsys=daemon\nlevel=info msg=\"Serving cilium API at unix:///var/run/cilium/cilium.sock\" subsys=daemon\nlevel=info msg=\"Hubble server is disabled\" subsys=hubble\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2428 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2428 identityLabels=\"reserved:health\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2428 identity=4 identityLabels=\"reserved:health\" ipv4= ipv6= 
k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Compiled new BPF template\" BPFCompilationTime=1.253901024s file-path=/var/run/cilium/state/templates/e16729f2300116e99949acd50a993ad840d0ad9c/bpf_host.o subsys=datapath-loader\nlevel=info msg=\"Serving cilium health API at unix:///var/run/cilium/health.sock\" subsys=health-server\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=178 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Compiled new BPF template\" BPFCompilationTime=1.750217048s file-path=/var/run/cilium/state/templates/45b22414f2e8da38cd4f55f2e1f506003e9abf4d/bpf_lxc.o subsys=datapath-loader\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2428 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"regenerating all endpoints\" reason= subsys=endpoint-manager\n==== END logs for container cilium-agent of pod kube-system/cilium-9h72w ====\n==== START logs for container cilium-operator of pod kube-system/cilium-operator-7cd4557b96-jxr8n ====\nlevel=info msg=\"Skipped reading configuration file\" reason=\"Config File \\\"cilium-operators\\\" Not Found in \\\"[/]\\\"\" subsys=config\nlevel=info msg=\"Started gops server\" address=\"127.0.0.1:9891\" subsys=cilium-operator\nlevel=info msg=\"  --aws-instance-limit-mapping='map[]'\" subsys=cilium-operator\nlevel=info msg=\"  --aws-release-excess-ips='false'\" subsys=cilium-operator\nlevel=info msg=\"  --azure-cloud-name='AzurePublicCloud'\" subsys=cilium-operator\nlevel=info msg=\"  --azure-resource-group=''\" subsys=cilium-operator\nlevel=info msg=\"  --azure-subscription-id=''\" subsys=cilium-operator\nlevel=info msg=\"  --azure-use-primary-address='true'\" subsys=cilium-operator\nlevel=info msg=\"  --azure-user-assigned-identity-id=''\" subsys=cilium-operator\nlevel=info msg=\"  --cilium-endpoint-gc-interval='5m0s'\" subsys=cilium-operator\nlevel=info msg=\"  --cluster-id='0'\" subsys=cilium-operator\nlevel=info msg=\"  --cluster-name='default'\" subsys=cilium-operator\nlevel=info msg=\"  --cluster-pool-ipv4-cidr=''\" subsys=cilium-operator\nlevel=info msg=\"  --cluster-pool-ipv4-mask-size='24'\" subsys=cilium-operator\nlevel=info msg=\"  --cluster-pool-ipv6-cidr=''\" subsys=cilium-operator\nlevel=info msg=\"  --cluster-pool-ipv6-mask-size='112'\" subsys=cilium-operator\nlevel=info msg=\"  --cmdref=''\" subsys=cilium-operator\nlevel=info msg=\"  --cnp-node-status-gc-interval='2m0s'\" subsys=cilium-operator\nlevel=info msg=\"  --cnp-status-update-interval='1s'\" subsys=cilium-operator\nlevel=info msg=\"  --config=''\" subsys=cilium-operator\nlevel=info msg=\"  --config-dir='/tmp/cilium/config-map'\" 
subsys=cilium-operator\nlevel=info msg=\"  --crd-wait-timeout='5m0s'\" subsys=cilium-operator\nlevel=info msg=\"  --debug='false'\" subsys=cilium-operator\nlevel=info msg=\"  --disable-cnp-status-updates='false'\" subsys=cilium-operator\nlevel=info msg=\"  --disable-endpoint-crd='false'\" subsys=cilium-operator\nlevel=info msg=\"  --ec2-api-endpoint=''\" subsys=cilium-operator\nlevel=info msg=\"  --enable-ipv4='true'\" subsys=cilium-operator\nlevel=info msg=\"  --enable-ipv6='false'\" subsys=cilium-operator\nlevel=info msg=\"  --enable-k8s-api-discovery='false'\" subsys=cilium-operator\nlevel=info msg=\"  --enable-k8s-endpoint-slice='true'\" subsys=cilium-operator\nlevel=info msg=\"  --enable-k8s-event-handover='false'\" subsys=cilium-operator\nlevel=info msg=\"  --enable-metrics='false'\" subsys=cilium-operator\nlevel=info msg=\"  --eni-tags='map[KubernetesCluster:e2e-459b123097-cb70c.test-cncf-aws.k8s.io]'\" subsys=cilium-operator\nlevel=info msg=\"  --gops-port='9891'\" subsys=cilium-operator\nlevel=info msg=\"  --identity-allocation-mode='crd'\" subsys=cilium-operator\nlevel=info msg=\"  --identity-gc-interval='15m0s'\" subsys=cilium-operator\nlevel=info msg=\"  --identity-gc-rate-interval='1m0s'\" subsys=cilium-operator\nlevel=info msg=\"  --identity-gc-rate-limit='2500'\" subsys=cilium-operator\nlevel=info msg=\"  --identity-heartbeat-timeout='30m0s'\" subsys=cilium-operator\nlevel=info msg=\"  --ipam='kubernetes'\" subsys=cilium-operator\nlevel=info msg=\"  --k8s-api-server=''\" subsys=cilium-operator\nlevel=info msg=\"  --k8s-heartbeat-timeout='30s'\" subsys=cilium-operator\nlevel=info msg=\"  --k8s-kubeconfig-path=''\" subsys=cilium-operator\nlevel=info msg=\"  --k8s-namespace='kube-system'\" subsys=cilium-operator\nlevel=info msg=\"  --k8s-service-proxy-name=''\" subsys=cilium-operator\nlevel=info msg=\"  --kvstore=''\" subsys=cilium-operator\nlevel=info msg=\"  --kvstore-opt='map[]'\" subsys=cilium-operator\nlevel=info msg=\"  --leader-election-lease-duration='15s'\" subsys=cilium-operator\nlevel=info msg=\"  --leader-election-renew-deadline='10s'\" subsys=cilium-operator\nlevel=info msg=\"  --leader-election-retry-period='2s'\" subsys=cilium-operator\nlevel=info msg=\"  --limit-ipam-api-burst='4'\" subsys=cilium-operator\nlevel=info msg=\"  --limit-ipam-api-qps='20'\" subsys=cilium-operator\nlevel=info msg=\"  --log-driver=''\" subsys=cilium-operator\nlevel=info msg=\"  --log-opt='map[]'\" subsys=cilium-operator\nlevel=info msg=\"  --nodes-gc-interval='2m0s'\" subsys=cilium-operator\nlevel=info msg=\"  --operator-api-serve-addr='localhost:9234'\" subsys=cilium-operator\nlevel=info msg=\"  --operator-prometheus-serve-addr=':6942'\" subsys=cilium-operator\nlevel=info msg=\"  --parallel-alloc-workers='50'\" subsys=cilium-operator\nlevel=info msg=\"  --subnet-ids-filter=''\" subsys=cilium-operator\nlevel=info msg=\"  --subnet-tags-filter='[]'\" subsys=cilium-operator\nlevel=info msg=\"  --synchronize-k8s-nodes='true'\" subsys=cilium-operator\nlevel=info msg=\"  --synchronize-k8s-services='true'\" subsys=cilium-operator\nlevel=info msg=\"  --unmanaged-pod-watcher-interval='15'\" subsys=cilium-operator\nlevel=info msg=\"  --update-ec2-adapter-limit-via-api='false'\" subsys=cilium-operator\nlevel=info msg=\"  --update-ec2-apdater-limit-via-api='false'\" subsys=cilium-operator\nlevel=info msg=\"  --version='false'\" subsys=cilium-operator\nlevel=info msg=\"Cilium Operator 1.9.7 f993696 2021-05-12T18:21:30-07:00 go version go1.15.12 linux/amd64\" subsys=cilium-operator\nlevel=info 
msg=\"Establishing connection to apiserver\" host=\"https://api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:443\" subsys=k8s\nlevel=info msg=\"Starting apiserver on address localhost:9234\" subsys=cilium-operator\nlevel=info msg=\"Establishing connection to apiserver\" host=\"https://api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:443\" subsys=k8s\nlevel=info msg=\"Connected to apiserver\" subsys=k8s\nlevel=info msg=\"Creating CRD (CustomResourceDefinition)...\" name=CiliumLocalRedirectPolicy/v2 subsys=k8s\nlevel=info msg=\"Creating CRD (CustomResourceDefinition)...\" name=CiliumIdentity/v2 subsys=k8s\nlevel=info msg=\"Creating CRD (CustomResourceDefinition)...\" name=CiliumExternalWorkload/v2 subsys=k8s\nlevel=info msg=\"Creating CRD (CustomResourceDefinition)...\" name=CiliumEndpoint/v2 subsys=k8s\nlevel=info msg=\"Creating CRD (CustomResourceDefinition)...\" name=CiliumNode/v2 subsys=k8s\nlevel=info msg=\"Creating CRD (CustomResourceDefinition)...\" name=CiliumClusterwideNetworkPolicy/v2 subsys=k8s\nlevel=info msg=\"Creating CRD (CustomResourceDefinition)...\" name=CiliumNetworkPolicy/v2 subsys=k8s\nlevel=info msg=\"CRD (CustomResourceDefinition) is installed and up-to-date\" name=CiliumExternalWorkload/v2 subsys=k8s\nlevel=info msg=\"CRD (CustomResourceDefinition) is installed and up-to-date\" name=CiliumEndpoint/v2 subsys=k8s\nlevel=info msg=\"CRD (CustomResourceDefinition) is installed and up-to-date\" name=CiliumIdentity/v2 subsys=k8s\nlevel=info msg=\"CRD (CustomResourceDefinition) is installed and up-to-date\" name=CiliumNode/v2 subsys=k8s\nlevel=info msg=\"CRD (CustomResourceDefinition) is installed and up-to-date\" name=CiliumLocalRedirectPolicy/v2 subsys=k8s\nlevel=info msg=\"CRD (CustomResourceDefinition) is installed and up-to-date\" name=CiliumClusterwideNetworkPolicy/v2 subsys=k8s\nlevel=info msg=\"CRD (CustomResourceDefinition) is installed and up-to-date\" name=CiliumNetworkPolicy/v2 subsys=k8s\nlevel=info msg=\"attempting to acquire leader lease  kube-system/cilium-operator-resource-lock...\" subsys=klog\nlevel=info msg=\"successfully acquired lease kube-system/cilium-operator-resource-lock\" subsys=klog\nlevel=info msg=\"Leading the operator HA deployment\" subsys=cilium-operator\nlevel=info msg=\"Initializing IPAM\" mode=kubernetes subsys=cilium-operator\nlevel=info msg=\"Starting to synchronize CiliumNode custom resources...\" subsys=cilium-operator\nlevel=info msg=\"Starting to garbage collect stale CiliumEndpoint custom resources...\" subsys=cilium-operator\nlevel=info msg=\"Starting CRD identity garbage collector with 15m0s interval...\" subsys=cilium-operator\nlevel=info msg=\"Starting CNP derivative handler...\" subsys=cilium-operator\nlevel=info msg=\"Starting CCNP derivative handler...\" subsys=cilium-operator\nlevel=info msg=\"Initialization complete\" subsys=cilium-operator\n==== END logs for container cilium-operator of pod kube-system/cilium-operator-7cd4557b96-jxr8n ====\n==== START logs for container clean-cilium-state of pod kube-system/cilium-ph5rg ====\n==== END logs for container clean-cilium-state of pod kube-system/cilium-ph5rg ====\n==== START logs for container cilium-agent of pod kube-system/cilium-ph5rg ====\nlevel=info msg=\"Skipped reading configuration file\" reason=\"Config File \\\"ciliumd\\\" Not Found in \\\"[/root]\\\"\" subsys=config\nlevel=info msg=\"Started gops server\" address=\"127.0.0.1:9890\" subsys=daemon\nlevel=info msg=\"  --agent-health-port='9876'\" subsys=daemon\nlevel=info msg=\"  --agent-labels=''\" 
subsys=daemon\nlevel=info msg=\"  --allow-icmp-frag-needed='true'\" subsys=daemon\nlevel=info msg=\"  --allow-localhost='auto'\" subsys=daemon\nlevel=info msg=\"  --annotate-k8s-node='true'\" subsys=daemon\nlevel=info msg=\"  --api-rate-limit='map[]'\" subsys=daemon\nlevel=info msg=\"  --arping-refresh-period='5m0s'\" subsys=daemon\nlevel=info msg=\"  --auto-create-cilium-node-resource='true'\" subsys=daemon\nlevel=info msg=\"  --auto-direct-node-routes='false'\" subsys=daemon\nlevel=info msg=\"  --blacklist-conflicting-routes='false'\" subsys=daemon\nlevel=info msg=\"  --bpf-compile-debug='false'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-global-any-max='262144'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-global-tcp-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-any='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp='6h0m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp-fin='10s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp-syn='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-service-any='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-service-tcp='6h0m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-fragments-map-max='8192'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-acceleration='disabled'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-algorithm='random'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-maglev-table-size='16381'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-map-max='65536'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-mode='snat'\" subsys=daemon\nlevel=info msg=\"  --bpf-map-dynamic-size-ratio='0'\" subsys=daemon\nlevel=info msg=\"  --bpf-nat-global-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-neigh-global-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-policy-map-max='16384'\" subsys=daemon\nlevel=info msg=\"  --bpf-root=''\" subsys=daemon\nlevel=info msg=\"  --bpf-sock-rev-map-max='262144'\" subsys=daemon\nlevel=info msg=\"  --certificates-directory='/var/run/cilium/certs'\" subsys=daemon\nlevel=info msg=\"  --cgroup-root=''\" subsys=daemon\nlevel=info msg=\"  --cluster-id='0'\" subsys=daemon\nlevel=info msg=\"  --cluster-name='default'\" subsys=daemon\nlevel=info msg=\"  --clustermesh-config='/var/lib/cilium/clustermesh/'\" subsys=daemon\nlevel=info msg=\"  --cmdref=''\" subsys=daemon\nlevel=info msg=\"  --config=''\" subsys=daemon\nlevel=info msg=\"  --config-dir='/tmp/cilium/config-map'\" subsys=daemon\nlevel=info msg=\"  --conntrack-gc-interval='0s'\" subsys=daemon\nlevel=info msg=\"  --crd-wait-timeout='5m0s'\" subsys=daemon\nlevel=info msg=\"  --datapath-mode='veth'\" subsys=daemon\nlevel=info msg=\"  --debug='false'\" subsys=daemon\nlevel=info msg=\"  --debug-verbose=''\" subsys=daemon\nlevel=info msg=\"  --device=''\" subsys=daemon\nlevel=info msg=\"  --devices=''\" subsys=daemon\nlevel=info msg=\"  --direct-routing-device=''\" subsys=daemon\nlevel=info msg=\"  --disable-cnp-status-updates='false'\" subsys=daemon\nlevel=info msg=\"  --disable-conntrack='false'\" subsys=daemon\nlevel=info msg=\"  --disable-endpoint-crd='false'\" subsys=daemon\nlevel=info msg=\"  --disable-envoy-version-check='false'\" subsys=daemon\nlevel=info msg=\"  --disable-iptables-feeder-rules=''\" subsys=daemon\nlevel=info msg=\"  --dns-max-ips-per-restored-rule='1000'\" subsys=daemon\nlevel=info msg=\"  --egress-masquerade-interfaces=''\" subsys=daemon\nlevel=info msg=\"  
--egress-multi-home-ip-rule-compat='false'\" subsys=daemon\nlevel=info msg=\"  --enable-auto-protect-node-port-range='true'\" subsys=daemon\nlevel=info msg=\"  --enable-bandwidth-manager='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-clock-probe='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-masquerade='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-tproxy='false'\" subsys=daemon\nlevel=info msg=\"  --enable-endpoint-health-checking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-endpoint-routes='false'\" subsys=daemon\nlevel=info msg=\"  --enable-external-ips='true'\" subsys=daemon\nlevel=info msg=\"  --enable-health-check-nodeport='true'\" subsys=daemon\nlevel=info msg=\"  --enable-health-checking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-host-firewall='false'\" subsys=daemon\nlevel=info msg=\"  --enable-host-legacy-routing='false'\" subsys=daemon\nlevel=info msg=\"  --enable-host-port='true'\" subsys=daemon\nlevel=info msg=\"  --enable-host-reachable-services='false'\" subsys=daemon\nlevel=info msg=\"  --enable-hubble='false'\" subsys=daemon\nlevel=info msg=\"  --enable-identity-mark='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ip-masq-agent='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipsec='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv4='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv4-fragment-tracking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv6='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv6-ndp='false'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-api-discovery='false'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-endpoint-slice='true'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-event-handover='false'\" subsys=daemon\nlevel=info msg=\"  --enable-l7-proxy='true'\" subsys=daemon\nlevel=info msg=\"  --enable-local-node-route='true'\" subsys=daemon\nlevel=info msg=\"  --enable-local-redirect-policy='false'\" subsys=daemon\nlevel=info msg=\"  --enable-monitor='true'\" subsys=daemon\nlevel=info msg=\"  --enable-node-port='true'\" subsys=daemon\nlevel=info msg=\"  --enable-policy='default'\" subsys=daemon\nlevel=info msg=\"  --enable-remote-node-identity='true'\" subsys=daemon\nlevel=info msg=\"  --enable-selective-regeneration='true'\" subsys=daemon\nlevel=info msg=\"  --enable-session-affinity='false'\" subsys=daemon\nlevel=info msg=\"  --enable-svc-source-range-check='true'\" subsys=daemon\nlevel=info msg=\"  --enable-tracing='false'\" subsys=daemon\nlevel=info msg=\"  --enable-well-known-identities='true'\" subsys=daemon\nlevel=info msg=\"  --enable-xt-socket-fallback='true'\" subsys=daemon\nlevel=info msg=\"  --encrypt-interface=''\" subsys=daemon\nlevel=info msg=\"  --encrypt-node='false'\" subsys=daemon\nlevel=info msg=\"  --endpoint-interface-name-prefix='lxc+'\" subsys=daemon\nlevel=info msg=\"  --endpoint-queue-size='25'\" subsys=daemon\nlevel=info msg=\"  --endpoint-status=''\" subsys=daemon\nlevel=info msg=\"  --envoy-log=''\" subsys=daemon\nlevel=info msg=\"  --exclude-local-address=''\" subsys=daemon\nlevel=info msg=\"  --fixed-identity-mapping='map[]'\" subsys=daemon\nlevel=info msg=\"  --flannel-master-device=''\" subsys=daemon\nlevel=info msg=\"  --flannel-uninstall-on-exit='false'\" subsys=daemon\nlevel=info msg=\"  --force-local-policy-eval-at-source='true'\" subsys=daemon\nlevel=info msg=\"  --gops-port='9890'\" subsys=daemon\nlevel=info msg=\"  --host-reachable-services-protos='tcp,udp'\" subsys=daemon\nlevel=info msg=\"  --http-403-msg=''\" 
subsys=daemon\nlevel=info msg=\"  --http-idle-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --http-max-grpc-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --http-normalize-path='true'\" subsys=daemon\nlevel=info msg=\"  --http-request-timeout='3600'\" subsys=daemon\nlevel=info msg=\"  --http-retry-count='3'\" subsys=daemon\nlevel=info msg=\"  --http-retry-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --hubble-disable-tls='false'\" subsys=daemon\nlevel=info msg=\"  --hubble-event-queue-size='0'\" subsys=daemon\nlevel=info msg=\"  --hubble-flow-buffer-size='4095'\" subsys=daemon\nlevel=info msg=\"  --hubble-listen-address=''\" subsys=daemon\nlevel=info msg=\"  --hubble-metrics=''\" subsys=daemon\nlevel=info msg=\"  --hubble-metrics-server=''\" subsys=daemon\nlevel=info msg=\"  --hubble-socket-path='/var/run/cilium/hubble.sock'\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-cert-file=''\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-client-ca-files=''\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-key-file=''\" subsys=daemon\nlevel=info msg=\"  --identity-allocation-mode='crd'\" subsys=daemon\nlevel=info msg=\"  --identity-change-grace-period='5s'\" subsys=daemon\nlevel=info msg=\"  --install-iptables-rules='true'\" subsys=daemon\nlevel=info msg=\"  --ip-allocation-timeout='2m0s'\" subsys=daemon\nlevel=info msg=\"  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'\" subsys=daemon\nlevel=info msg=\"  --ipam='kubernetes'\" subsys=daemon\nlevel=info msg=\"  --ipsec-key-file=''\" subsys=daemon\nlevel=info msg=\"  --iptables-lock-timeout='5s'\" subsys=daemon\nlevel=info msg=\"  --iptables-random-fully='false'\" subsys=daemon\nlevel=info msg=\"  --ipv4-node='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv4-pod-subnets=''\" subsys=daemon\nlevel=info msg=\"  --ipv4-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv4-service-loopback-address='169.254.42.1'\" subsys=daemon\nlevel=info msg=\"  --ipv4-service-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-cluster-alloc-cidr='f00d::/64'\" subsys=daemon\nlevel=info msg=\"  --ipv6-mcast-device=''\" subsys=daemon\nlevel=info msg=\"  --ipv6-node='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-pod-subnets=''\" subsys=daemon\nlevel=info msg=\"  --ipv6-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-service-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipvlan-master-device='undefined'\" subsys=daemon\nlevel=info msg=\"  --join-cluster='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-api-server=''\" subsys=daemon\nlevel=info msg=\"  --k8s-force-json-patch='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-heartbeat-timeout='30s'\" subsys=daemon\nlevel=info msg=\"  --k8s-kubeconfig-path=''\" subsys=daemon\nlevel=info msg=\"  --k8s-namespace='kube-system'\" subsys=daemon\nlevel=info msg=\"  --k8s-require-ipv4-pod-cidr='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-require-ipv6-pod-cidr='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-service-cache-size='128'\" subsys=daemon\nlevel=info msg=\"  --k8s-service-proxy-name=''\" subsys=daemon\nlevel=info msg=\"  --k8s-sync-timeout='3m0s'\" subsys=daemon\nlevel=info msg=\"  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'\" subsys=daemon\nlevel=info msg=\"  --k8s-watcher-queue-size='1024'\" subsys=daemon\nlevel=info msg=\"  --keep-config='false'\" subsys=daemon\nlevel=info msg=\"  --kube-proxy-replacement='strict'\" subsys=daemon\nlevel=info msg=\"  
--kube-proxy-replacement-healthz-bind-address=''\" subsys=daemon\nlevel=info msg=\"  --kvstore=''\" subsys=daemon\nlevel=info msg=\"  --kvstore-connectivity-timeout='2m0s'\" subsys=daemon\nlevel=info msg=\"  --kvstore-lease-ttl='15m0s'\" subsys=daemon\nlevel=info msg=\"  --kvstore-opt='map[]'\" subsys=daemon\nlevel=info msg=\"  --kvstore-periodic-sync='5m0s'\" subsys=daemon\nlevel=info msg=\"  --label-prefix-file=''\" subsys=daemon\nlevel=info msg=\"  --labels=''\" subsys=daemon\nlevel=info msg=\"  --lib-dir='/var/lib/cilium'\" subsys=daemon\nlevel=info msg=\"  --log-driver=''\" subsys=daemon\nlevel=info msg=\"  --log-opt='map[]'\" subsys=daemon\nlevel=info msg=\"  --log-system-load='false'\" subsys=daemon\nlevel=info msg=\"  --masquerade='true'\" subsys=daemon\nlevel=info msg=\"  --max-controller-interval='0'\" subsys=daemon\nlevel=info msg=\"  --metrics=''\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation='medium'\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation-flags='syn,fin,rst'\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation-interval='5s'\" subsys=daemon\nlevel=info msg=\"  --monitor-queue-size='0'\" subsys=daemon\nlevel=info msg=\"  --mtu='0'\" subsys=daemon\nlevel=info msg=\"  --nat46-range='0:0:0:0:0:FFFF::/96'\" subsys=daemon\nlevel=info msg=\"  --native-routing-cidr=''\" subsys=daemon\nlevel=info msg=\"  --node-port-acceleration='disabled'\" subsys=daemon\nlevel=info msg=\"  --node-port-algorithm='random'\" subsys=daemon\nlevel=info msg=\"  --node-port-bind-protection='true'\" subsys=daemon\nlevel=info msg=\"  --node-port-mode='snat'\" subsys=daemon\nlevel=info msg=\"  --node-port-range='30000,32767'\" subsys=daemon\nlevel=info msg=\"  --policy-audit-mode='false'\" subsys=daemon\nlevel=info msg=\"  --policy-queue-size='100'\" subsys=daemon\nlevel=info msg=\"  --policy-trigger-interval='1s'\" subsys=daemon\nlevel=info msg=\"  --pprof='false'\" subsys=daemon\nlevel=info msg=\"  --preallocate-bpf-maps='false'\" subsys=daemon\nlevel=info msg=\"  --prefilter-device='undefined'\" subsys=daemon\nlevel=info msg=\"  --prefilter-mode='native'\" subsys=daemon\nlevel=info msg=\"  --prepend-iptables-chains='true'\" subsys=daemon\nlevel=info msg=\"  --prometheus-serve-addr=''\" subsys=daemon\nlevel=info msg=\"  --proxy-connect-timeout='1'\" subsys=daemon\nlevel=info msg=\"  --proxy-prometheus-port='0'\" subsys=daemon\nlevel=info msg=\"  --read-cni-conf=''\" subsys=daemon\nlevel=info msg=\"  --restore='true'\" subsys=daemon\nlevel=info msg=\"  --sidecar-istio-proxy-image='cilium/istio_proxy'\" subsys=daemon\nlevel=info msg=\"  --single-cluster-route='false'\" subsys=daemon\nlevel=info msg=\"  --skip-crd-creation='false'\" subsys=daemon\nlevel=info msg=\"  --socket-path='/var/run/cilium/cilium.sock'\" subsys=daemon\nlevel=info msg=\"  --sockops-enable='false'\" subsys=daemon\nlevel=info msg=\"  --state-dir='/var/run/cilium'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-dns-reject-response-code='refused'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-enable-dns-compression='true'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-endpoint-max-ip-per-hostname='50'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-idle-connection-grace-period='0s'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-max-deferred-connection-deletes='10000'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-min-ttl='0'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-pre-cache=''\" subsys=daemon\nlevel=info msg=\"  --tofqdns-proxy-port='0'\" subsys=daemon\nlevel=info msg=\"  
--tofqdns-proxy-response-max-delay='100ms'\" subsys=daemon\nlevel=info msg=\"  --trace-payloadlen='128'\" subsys=daemon\nlevel=info msg=\"  --tunnel='vxlan'\" subsys=daemon\nlevel=info msg=\"  --version='false'\" subsys=daemon\nlevel=info msg=\"  --write-cni-conf-when-ready=''\" subsys=daemon\nlevel=info msg=\"     _ _ _\" subsys=daemon\nlevel=info msg=\" ___|_| |_|_ _ _____\" subsys=daemon\nlevel=info msg=\"|  _| | | | | |     |\" subsys=daemon\nlevel=info msg=\"|___|_|_|_|___|_|_|_|\" subsys=daemon\nlevel=info msg=\"Cilium 1.9.7 f993696 2021-05-12T18:21:30-07:00 go version go1.15.12 linux/amd64\" subsys=daemon\nlevel=info msg=\"cilium-envoy  version: 82a70d56bf324287ced3129300db609eceb21d10/1.17.3/Distribution/RELEASE/BoringSSL\" subsys=daemon\nlevel=info msg=\"clang (10.0.0) and kernel (4.19.0) versions: OK!\" subsys=linux-datapath\nlevel=info msg=\"linking environment: OK!\" subsys=linux-datapath\nlevel=info msg=\"Detected mounted BPF filesystem at /sys/fs/bpf\" subsys=bpf\nlevel=info msg=\"Parsing base label prefixes from default label list\" subsys=labels-filter\nlevel=info msg=\"Parsing additional label prefixes from user inputs: []\" subsys=labels-filter\nlevel=info msg=\"Final label prefixes to be used for identity evaluation:\" subsys=labels-filter\nlevel=info msg=\" - reserved:.*\" subsys=labels-filter\nlevel=info msg=\" - :io.kubernetes.pod.namespace\" subsys=labels-filter\nlevel=info msg=\" - :io.cilium.k8s.namespace.labels\" subsys=labels-filter\nlevel=info msg=\" - :app.kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:io.kubernetes\" subsys=labels-filter\nlevel=info msg=\" - !:kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:.*beta.kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:k8s.io\" subsys=labels-filter\nlevel=info msg=\" - !:pod-template-generation\" subsys=labels-filter\nlevel=info msg=\" - !:pod-template-hash\" subsys=labels-filter\nlevel=info msg=\" - !:controller-revision-hash\" subsys=labels-filter\nlevel=info msg=\" - !:annotation.*\" subsys=labels-filter\nlevel=info msg=\" - !:etcd_node\" subsys=labels-filter\nlevel=info msg=\"Using autogenerated IPv4 allocation range\" subsys=node v4Prefix=10.44.0.0/16\nlevel=info msg=\"Initializing daemon\" subsys=daemon\nlevel=info msg=\"Establishing connection to apiserver\" host=\"https://api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:443\" subsys=k8s\nlevel=info msg=\"Connected to apiserver\" subsys=k8s\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Trying to auto-enable \\\"enable-node-port\\\", \\\"enable-external-ips\\\", \\\"enable-host-reachable-services\\\", \\\"enable-host-port\\\", \\\"enable-session-affinity\\\" features\" subsys=daemon\nlevel=warning msg=\"Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.\" subsys=daemon\nlevel=info msg=\"BPF host routing is only available in native routing mode. 
Falling back to legacy host routing (enable-host-legacy-routing=true).\" subsys=daemon\nlevel=info msg=\"Inheriting MTU from external network interface\" device=ens5 ipAddr=172.20.56.44 mtu=9001 subsys=mtu\nlevel=info msg=\"Restored services from maps\" failed=0 restored=0 subsys=service\nlevel=info msg=\"Reading old endpoints...\" subsys=daemon\nlevel=info msg=\"No old endpoints found.\" subsys=daemon\nlevel=info msg=\"Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock\" subsys=envoy-manager\nlevel=error msg=\"Command execution failed\" cmd=\"[iptables -t mangle -n -L CILIUM_PRE_mangle]\" error=\"exit status 1\" subsys=iptables\nlevel=warning msg=\"# Warning: iptables-legacy tables present, use iptables-legacy to see them\" subsys=iptables\nlevel=warning msg=\"iptables: No chain/target/match by that name.\" subsys=iptables\nlevel=info msg=\"Waiting until all Cilium CRDs are available\" subsys=k8s\nlevel=info msg=\"All Cilium CRDs have been found and are available\" subsys=k8s\nlevel=info msg=\"Retrieved node information from kubernetes node\" nodeName=ip-172-20-56-44.ap-southeast-1.compute.internal subsys=k8s\nlevel=info msg=\"Received own node information from API server\" ipAddr.ipv4=172.20.56.44 ipAddr.ipv6=\"<nil>\" k8sNodeIP=172.20.56.44 labels=\"map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-56-44.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a]\" nodeName=ip-172-20-56-44.ap-southeast-1.compute.internal subsys=k8s v4Prefix=100.96.3.0/24 v6Prefix=\"<nil>\"\nlevel=info msg=\"k8s mode: Allowing localhost to reach local endpoints\" subsys=daemon\nlevel=info msg=\"Using auto-derived devices for BPF node port\" devices=\"[ens5]\" directRoutingDevice=ens5 subsys=daemon\nlevel=info msg=\"Enabling k8s event listener\" subsys=k8s-watcher\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Waiting until all pre-existing resources related to policy have been received\" subsys=k8s-watcher\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Removing stale endpoint interfaces\" subsys=daemon\nlevel=info msg=\"Skipping kvstore configuration\" subsys=daemon\nlevel=info msg=\"Initializing node addressing\" subsys=daemon\nlevel=info msg=\"Initializing kubernetes IPAM\" subsys=ipam v4Prefix=100.96.3.0/24 v6Prefix=\"<nil>\"\nlevel=info msg=\"Restoring endpoints...\" subsys=daemon\nlevel=info msg=\"Endpoints restored\" failed=0 restored=0 subsys=daemon\nlevel=info msg=\"Addressing information:\" subsys=daemon\nlevel=info msg=\"  Cluster-Name: default\" subsys=daemon\nlevel=info msg=\"  Cluster-ID: 0\" subsys=daemon\nlevel=info msg=\"  Local node-name: ip-172-20-56-44.ap-southeast-1.compute.internal\" subsys=daemon\nlevel=info 
msg=\"  Node-IPv6: <nil>\" subsys=daemon\nlevel=info msg=\"  External-Node IPv4: 172.20.56.44\" subsys=daemon\nlevel=info msg=\"  Internal-Node IPv4: 100.96.3.3\" subsys=daemon\nlevel=info msg=\"  IPv4 allocation prefix: 100.96.3.0/24\" subsys=daemon\nlevel=info msg=\"  Loopback IPv4: 169.254.42.1\" subsys=daemon\nlevel=info msg=\"  Local IPv4 addresses:\" subsys=daemon\nlevel=info msg=\"  - 172.20.56.44\" subsys=daemon\nlevel=info msg=\"Creating or updating CiliumNode resource\" node=ip-172-20-56-44.ap-southeast-1.compute.internal subsys=nodediscovery\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Adding local node to cluster\" node=\"{ip-172-20-56-44.ap-southeast-1.compute.internal default [{ExternalIP 13.228.203.244} {InternalIP 172.20.56.44} {CiliumInternalIP 100.96.3.3}] 100.96.3.0/24 <nil> 100.96.3.212 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-56-44.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] 6}\" subsys=nodediscovery\nlevel=info msg=\"Successfully created CiliumNode resource\" subsys=nodediscovery\nlevel=info msg=\"Annotating k8s node\" subsys=daemon v4CiliumHostIP.IPv4=100.96.3.3 v4Prefix=100.96.3.0/24 v4healthIP.IPv4=100.96.3.212 v6CiliumHostIP.IPv6=\"<nil>\" v6Prefix=\"<nil>\" v6healthIP.IPv6=\"<nil>\"\nlevel=info msg=\"Initializing identity allocator\" subsys=identity-cache\nlevel=info msg=\"Cluster-ID is not specified, skipping ClusterMesh initialization\" subsys=daemon\nlevel=info msg=\"Setting up BPF datapath\" bpfClockSource=ktime bpfInsnSet=v2 subsys=datapath-loader\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0\nlevel=info msg=\"All pre-existing resources related to policy have been received; continuing\" subsys=k8s-watcher\nlevel=info msg=\"Adding new proxy port rules for cilium-dns-egress:45277\" proxy port name=cilium-dns-egress subsys=proxy\nlevel=info msg=\"Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock\" subsys=monitor-agent\nlevel=info msg=\"Validating configured node address ranges\" subsys=daemon\nlevel=info msg=\"Starting connection tracking garbage collector\" subsys=daemon\nlevel=info msg=\"Starting IP identity watcher\" subsys=ipcache\nlevel=info msg=\"Initial scan of connection tracking completed\" subsys=ct-gc\nlevel=info msg=\"Regenerating restored endpoints\" numRestored=0 subsys=daemon\nlevel=info msg=\"Datapath signal listener running\" subsys=signal\nlevel=info msg=\"Creating host endpoint\" subsys=daemon\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 
endpointID=2109 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2109 identityLabels=\"k8s:kops.k8s.io/instancegroup=nodes-ap-southeast-1a,k8s:node-role.kubernetes.io/node,k8s:node.kubernetes.io/instance-type=t3.medium,k8s:topology.kubernetes.io/region=ap-southeast-1,k8s:topology.kubernetes.io/zone=ap-southeast-1a,reserved:host\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2109 identity=1 identityLabels=\"k8s:kops.k8s.io/instancegroup=nodes-ap-southeast-1a,k8s:node-role.kubernetes.io/node,k8s:node.kubernetes.io/instance-type=t3.medium,k8s:topology.kubernetes.io/region=ap-southeast-1,k8s:topology.kubernetes.io/zone=ap-southeast-1a,reserved:host\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Launching Cilium health daemon\" subsys=daemon\nlevel=info msg=\"Finished regenerating restored endpoints\" regenerated=0 subsys=daemon total=0\nlevel=info msg=\"Launching Cilium health endpoint\" subsys=daemon\nlevel=info msg=\"Started healthz status API server\" address=\"127.0.0.1:9876\" subsys=daemon\nlevel=info msg=\"Initializing Cilium API\" subsys=daemon\nlevel=info msg=\"Daemon initialization completed\" bootstrapTime=8.342709744s subsys=daemon\nlevel=info msg=\"Serving cilium API at unix:///var/run/cilium/cilium.sock\" subsys=daemon\nlevel=info msg=\"Hubble server is disabled\" subsys=hubble\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2555 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2555 identityLabels=\"reserved:health\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2555 identity=4 identityLabels=\"reserved:health\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Compiled new BPF template\" BPFCompilationTime=1.334168895s file-path=/var/run/cilium/state/templates/e8efb37136cff38867c92c29f2de7dbc60c81d61/bpf_host.o subsys=datapath-loader\nlevel=info msg=\"Compiled new BPF template\" BPFCompilationTime=1.675761445s file-path=/var/run/cilium/state/templates/66892438269427be5bd121b4ef9cc55533686603/bpf_lxc.o subsys=datapath-loader\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2109 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2555 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Serving cilium health API at unix:///var/run/cilium/health.sock\" subsys=health-server\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=03038295-c04d-11eb-b220-0611bff56d72\nlevel=info msg=\"API request released by rate limiter\" 
maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=03038295-c04d-11eb-b220-0611bff56d72 waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.3.109 0300731b-c04d-11eb-b220-0611bff56d72  }\" containerID=85cde5f9c11300d92dc76b3c4ed6389964266eadc4474d9799103221ca7f386d datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc2aae9d15f126 k8sPodName=kube-system/coredns-f45c4bf76-hwsqm labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=589 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=589 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/name=core.addons.k8s.io,k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/version=1.4.0,k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/managed-by=kops,k8s:io.cilium.k8s.namespace.labels.k8s-addon=core.addons.k8s.io,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=coredns,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=kube-dns\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Reserved new local key\" key=\"k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/name=core.addons.k8s.io;k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/version=1.4.0;k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/managed-by=kops;k8s:io.cilium.k8s.namespace.labels.k8s-addon=core.addons.k8s.io;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=coredns;k8s:io.kubernetes.pod.namespace=kube-system;k8s:k8s-app=kube-dns;\" subsys=allocator\nlevel=info msg=\"Reusing existing global key\" key=\"k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/name=core.addons.k8s.io;k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/version=1.4.0;k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/managed-by=kops;k8s:io.cilium.k8s.namespace.labels.k8s-addon=core.addons.k8s.io;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=coredns;k8s:io.kubernetes.pod.namespace=kube-system;k8s:k8s-app=kube-dns;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=589 identity=14688 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/name=core.addons.k8s.io,k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/version=1.4.0,k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/managed-by=kops,k8s:io.cilium.k8s.namespace.labels.k8s-addon=core.addons.k8s.io,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=coredns,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=kube-dns\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=589 identity=14688 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 
desiredPolicyRevision=1 endpointID=589 identity=14688 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=589 identity=14688 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=239.530203ms subsys=rate totalDuration=239.581135ms uuid=03038295-c04d-11eb-b220-0611bff56d72 waitDurationTotal=0s\nlevel=warning msg=\"Unable to update ipcache map entry on pod add\" error=\"ipcache entry for podIP 100.96.3.109 owned by kvstore or agent\" hostIP=100.96.3.109 k8sNamespace=kube-system k8sPodName=coredns-f45c4bf76-hwsqm podIP=100.96.3.109 podIPs=\"[{100.96.3.109}]\" subsys=k8s-watcher\nlevel=warning msg=\"Unable to update ipcache map entry on pod add\" error=\"ipcache entry for podIP 100.96.3.109 owned by kvstore or agent\" hostIP=100.96.3.109 k8sNamespace=kube-system k8sPodName=coredns-f45c4bf76-hwsqm podIP=100.96.3.109 podIPs=\"[{100.96.3.109}]\" subsys=k8s-watcher\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=18 rateLimiterSkipped=true subsys=rate uuid=72773bdb-c04d-11eb-b220-0611bff56d72\nlevel=info msg=\"API request released by rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=18 rateLimiterSkipped=true subsys=rate uuid=72773bdb-c04d-11eb-b220-0611bff56d72 waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.3.64 72717d18-c04d-11eb-b220-0611bff56d72  }\" containerID=64d3c54414a8c628d49ddbcf8733d772c7a86ba63a3784c5a5b33db46b0d011d datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc4783412496a8 k8sPodName=deployment-962/test-recreate-deployment-6cb8b65c46-mdntx labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=deployment,k8s:io.cilium.k8s.namespace.labels.e2e-run=6e95d24d-7e56-474e-a462-c249b66e085d,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=deployment-962,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=deployment-962,k8s:name=sample-pod-3\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. 
All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.e2e-framework:deployment k8s:io.cilium.k8s.namespace.labels.e2e-run:6e95d24d-7e56-474e-a462-c249b66e085d k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:deployment-962]\" subsys=crd-allocator\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=deployment;k8s:io.cilium.k8s.namespace.labels.e2e-run=6e95d24d-7e56-474e-a462-c249b66e085d;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=deployment-962;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=deployment-962;k8s:name=sample-pod-3;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1 identity=44974 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=deployment,k8s:io.cilium.k8s.namespace.labels.e2e-run=6e95d24d-7e56-474e-a462-c249b66e085d,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=deployment-962,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=deployment-962,k8s:name=sample-pod-3\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1 identity=44974 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1 identity=44974 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=1 identity=44974 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=287.153658ms subsys=rate totalDuration=287.246833ms uuid=72773bdb-c04d-11eb-b220-0611bff56d72 waitDurationTotal=0s\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=17 rateLimiterSkipped=true subsys=rate uuid=732d5507-c04d-11eb-b220-0611bff56d72\nlevel=info msg=\"API request released by rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=17 rateLimiterSkipped=true subsys=rate uuid=732d5507-c04d-11eb-b220-0611bff56d72 waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.3.246 7329cf27-c04d-11eb-b220-0611bff56d72  }\" containerID=9419210c846994579794d0a82631534ba5a89e24fb2d94a01f9364ab5fa04767 datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc9596dcb33614 k8sPodName=hostpath-1895/pod-host-path-test labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2514 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2514 
identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=hostpath,k8s:io.cilium.k8s.namespace.labels.e2e-run=0929d066-3c0a-4dd6-9f5d-c5e095a01abe,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=hostpath-1895,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=hostpath-1895\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.e2e-framework:hostpath k8s:io.cilium.k8s.namespace.labels.e2e-run:0929d066-3c0a-4dd6-9f5d-c5e095a01abe k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:hostpath-1895]\" subsys=crd-allocator\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=hostpath;k8s:io.cilium.k8s.namespace.labels.e2e-run=0929d066-3c0a-4dd6-9f5d-c5e095a01abe;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=hostpath-1895;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=hostpath-1895;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2514 identity=10417 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=hostpath,k8s:io.cilium.k8s.namespace.labels.e2e-run=0929d066-3c0a-4dd6-9f5d-c5e095a01abe,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=hostpath-1895,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=hostpath-1895\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2514 identity=10417 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2514 identity=10417 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=2514 identity=10417 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=249.152622ms subsys=rate totalDuration=249.239244ms uuid=732d5507-c04d-11eb-b220-0611bff56d72 waitDurationTotal=0s\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=17 rateLimiterSkipped=true subsys=rate uuid=73861d6d-c04d-11eb-b220-0611bff56d72\nlevel=info msg=\"API request released by rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=17 rateLimiterSkipped=true subsys=rate uuid=73861d6d-c04d-11eb-b220-0611bff56d72 waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.3.15 7384b876-c04d-11eb-b220-0611bff56d72  }\" containerID=1ce06be6fa79925a9949ca01f7c40bbd07846ae1c732135f8bf638f6975cc5e1 datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc0e7e43d1927d k8sPodName=kubectl-6106/e2e-test-httpd-pod labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1125 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" 
containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1125 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=kubectl,k8s:io.cilium.k8s.namespace.labels.e2e-run=7756cefd-08de-4923-895c-e6a3ba650eb1,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kubectl-6106,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=kubectl-6106,k8s:run=e2e-test-httpd-pod\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.e2e-framework:kubectl k8s:io.cilium.k8s.namespace.labels.e2e-run:7756cefd-08de-4923-895c-e6a3ba650eb1 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:kubectl-6106]\" subsys=crd-allocator\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=kubectl;k8s:io.cilium.k8s.namespace.labels.e2e-run=7756cefd-08de-4923-895c-e6a3ba650eb1;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kubectl-6106;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=kubectl-6106;k8s:run=e2e-test-httpd-pod;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1125 identity=50396 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=kubectl,k8s:io.cilium.k8s.namespace.labels.e2e-run=7756cefd-08de-4923-895c-e6a3ba650eb1,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kubectl-6106,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=kubectl-6106,k8s:run=e2e-test-httpd-pod\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1125 identity=50396 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1125 identity=50396 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=1125 identity=50396 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=234.044416ms subsys=rate totalDuration=234.128268ms uuid=73861d6d-c04d-11eb-b220-0611bff56d72 waitDurationTotal=0s\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"regenerating all endpoints\" reason= subsys=endpoint-manager\n==== END logs for container cilium-agent of pod kube-system/cilium-ph5rg ====\n==== START logs for container clean-cilium-state of pod kube-system/cilium-wth8r ====\n==== END logs for container clean-cilium-state of pod kube-system/cilium-wth8r ====\n==== START logs for container cilium-agent of pod kube-system/cilium-wth8r ====\nlevel=info msg=\"Skipped reading configuration file\" reason=\"Config File \\\"ciliumd\\\" Not Found in \\\"[/root]\\\"\" subsys=config\nlevel=info msg=\"Started gops server\" 
address=\"127.0.0.1:9890\" subsys=daemon\nlevel=info msg=\"  --agent-health-port='9876'\" subsys=daemon\nlevel=info msg=\"  --agent-labels=''\" subsys=daemon\nlevel=info msg=\"  --allow-icmp-frag-needed='true'\" subsys=daemon\nlevel=info msg=\"  --allow-localhost='auto'\" subsys=daemon\nlevel=info msg=\"  --annotate-k8s-node='true'\" subsys=daemon\nlevel=info msg=\"  --api-rate-limit='map[]'\" subsys=daemon\nlevel=info msg=\"  --arping-refresh-period='5m0s'\" subsys=daemon\nlevel=info msg=\"  --auto-create-cilium-node-resource='true'\" subsys=daemon\nlevel=info msg=\"  --auto-direct-node-routes='false'\" subsys=daemon\nlevel=info msg=\"  --blacklist-conflicting-routes='false'\" subsys=daemon\nlevel=info msg=\"  --bpf-compile-debug='false'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-global-any-max='262144'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-global-tcp-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-any='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp='6h0m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp-fin='10s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp-syn='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-service-any='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-service-tcp='6h0m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-fragments-map-max='8192'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-acceleration='disabled'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-algorithm='random'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-maglev-table-size='16381'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-map-max='65536'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-mode='snat'\" subsys=daemon\nlevel=info msg=\"  --bpf-map-dynamic-size-ratio='0'\" subsys=daemon\nlevel=info msg=\"  --bpf-nat-global-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-neigh-global-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-policy-map-max='16384'\" subsys=daemon\nlevel=info msg=\"  --bpf-root=''\" subsys=daemon\nlevel=info msg=\"  --bpf-sock-rev-map-max='262144'\" subsys=daemon\nlevel=info msg=\"  --certificates-directory='/var/run/cilium/certs'\" subsys=daemon\nlevel=info msg=\"  --cgroup-root=''\" subsys=daemon\nlevel=info msg=\"  --cluster-id='0'\" subsys=daemon\nlevel=info msg=\"  --cluster-name='default'\" subsys=daemon\nlevel=info msg=\"  --clustermesh-config='/var/lib/cilium/clustermesh/'\" subsys=daemon\nlevel=info msg=\"  --cmdref=''\" subsys=daemon\nlevel=info msg=\"  --config=''\" subsys=daemon\nlevel=info msg=\"  --config-dir='/tmp/cilium/config-map'\" subsys=daemon\nlevel=info msg=\"  --conntrack-gc-interval='0s'\" subsys=daemon\nlevel=info msg=\"  --crd-wait-timeout='5m0s'\" subsys=daemon\nlevel=info msg=\"  --datapath-mode='veth'\" subsys=daemon\nlevel=info msg=\"  --debug='false'\" subsys=daemon\nlevel=info msg=\"  --debug-verbose=''\" subsys=daemon\nlevel=info msg=\"  --device=''\" subsys=daemon\nlevel=info msg=\"  --devices=''\" subsys=daemon\nlevel=info msg=\"  --direct-routing-device=''\" subsys=daemon\nlevel=info msg=\"  --disable-cnp-status-updates='false'\" subsys=daemon\nlevel=info msg=\"  --disable-conntrack='false'\" subsys=daemon\nlevel=info msg=\"  --disable-endpoint-crd='false'\" subsys=daemon\nlevel=info msg=\"  --disable-envoy-version-check='false'\" subsys=daemon\nlevel=info msg=\"  --disable-iptables-feeder-rules=''\" subsys=daemon\nlevel=info msg=\"  
--dns-max-ips-per-restored-rule='1000'\" subsys=daemon\nlevel=info msg=\"  --egress-masquerade-interfaces=''\" subsys=daemon\nlevel=info msg=\"  --egress-multi-home-ip-rule-compat='false'\" subsys=daemon\nlevel=info msg=\"  --enable-auto-protect-node-port-range='true'\" subsys=daemon\nlevel=info msg=\"  --enable-bandwidth-manager='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-clock-probe='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-masquerade='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-tproxy='false'\" subsys=daemon\nlevel=info msg=\"  --enable-endpoint-health-checking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-endpoint-routes='false'\" subsys=daemon\nlevel=info msg=\"  --enable-external-ips='true'\" subsys=daemon\nlevel=info msg=\"  --enable-health-check-nodeport='true'\" subsys=daemon\nlevel=info msg=\"  --enable-health-checking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-host-firewall='false'\" subsys=daemon\nlevel=info msg=\"  --enable-host-legacy-routing='false'\" subsys=daemon\nlevel=info msg=\"  --enable-host-port='true'\" subsys=daemon\nlevel=info msg=\"  --enable-host-reachable-services='false'\" subsys=daemon\nlevel=info msg=\"  --enable-hubble='false'\" subsys=daemon\nlevel=info msg=\"  --enable-identity-mark='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ip-masq-agent='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipsec='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv4='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv4-fragment-tracking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv6='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv6-ndp='false'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-api-discovery='false'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-endpoint-slice='true'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-event-handover='false'\" subsys=daemon\nlevel=info msg=\"  --enable-l7-proxy='true'\" subsys=daemon\nlevel=info msg=\"  --enable-local-node-route='true'\" subsys=daemon\nlevel=info msg=\"  --enable-local-redirect-policy='false'\" subsys=daemon\nlevel=info msg=\"  --enable-monitor='true'\" subsys=daemon\nlevel=info msg=\"  --enable-node-port='true'\" subsys=daemon\nlevel=info msg=\"  --enable-policy='default'\" subsys=daemon\nlevel=info msg=\"  --enable-remote-node-identity='true'\" subsys=daemon\nlevel=info msg=\"  --enable-selective-regeneration='true'\" subsys=daemon\nlevel=info msg=\"  --enable-session-affinity='false'\" subsys=daemon\nlevel=info msg=\"  --enable-svc-source-range-check='true'\" subsys=daemon\nlevel=info msg=\"  --enable-tracing='false'\" subsys=daemon\nlevel=info msg=\"  --enable-well-known-identities='true'\" subsys=daemon\nlevel=info msg=\"  --enable-xt-socket-fallback='true'\" subsys=daemon\nlevel=info msg=\"  --encrypt-interface=''\" subsys=daemon\nlevel=info msg=\"  --encrypt-node='false'\" subsys=daemon\nlevel=info msg=\"  --endpoint-interface-name-prefix='lxc+'\" subsys=daemon\nlevel=info msg=\"  --endpoint-queue-size='25'\" subsys=daemon\nlevel=info msg=\"  --endpoint-status=''\" subsys=daemon\nlevel=info msg=\"  --envoy-log=''\" subsys=daemon\nlevel=info msg=\"  --exclude-local-address=''\" subsys=daemon\nlevel=info msg=\"  --fixed-identity-mapping='map[]'\" subsys=daemon\nlevel=info msg=\"  --flannel-master-device=''\" subsys=daemon\nlevel=info msg=\"  --flannel-uninstall-on-exit='false'\" subsys=daemon\nlevel=info msg=\"  --force-local-policy-eval-at-source='true'\" subsys=daemon\nlevel=info msg=\"  --gops-port='9890'\" 
subsys=daemon\nlevel=info msg=\"  --host-reachable-services-protos='tcp,udp'\" subsys=daemon\nlevel=info msg=\"  --http-403-msg=''\" subsys=daemon\nlevel=info msg=\"  --http-idle-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --http-max-grpc-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --http-normalize-path='true'\" subsys=daemon\nlevel=info msg=\"  --http-request-timeout='3600'\" subsys=daemon\nlevel=info msg=\"  --http-retry-count='3'\" subsys=daemon\nlevel=info msg=\"  --http-retry-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --hubble-disable-tls='false'\" subsys=daemon\nlevel=info msg=\"  --hubble-event-queue-size='0'\" subsys=daemon\nlevel=info msg=\"  --hubble-flow-buffer-size='4095'\" subsys=daemon\nlevel=info msg=\"  --hubble-listen-address=''\" subsys=daemon\nlevel=info msg=\"  --hubble-metrics=''\" subsys=daemon\nlevel=info msg=\"  --hubble-metrics-server=''\" subsys=daemon\nlevel=info msg=\"  --hubble-socket-path='/var/run/cilium/hubble.sock'\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-cert-file=''\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-client-ca-files=''\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-key-file=''\" subsys=daemon\nlevel=info msg=\"  --identity-allocation-mode='crd'\" subsys=daemon\nlevel=info msg=\"  --identity-change-grace-period='5s'\" subsys=daemon\nlevel=info msg=\"  --install-iptables-rules='true'\" subsys=daemon\nlevel=info msg=\"  --ip-allocation-timeout='2m0s'\" subsys=daemon\nlevel=info msg=\"  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'\" subsys=daemon\nlevel=info msg=\"  --ipam='kubernetes'\" subsys=daemon\nlevel=info msg=\"  --ipsec-key-file=''\" subsys=daemon\nlevel=info msg=\"  --iptables-lock-timeout='5s'\" subsys=daemon\nlevel=info msg=\"  --iptables-random-fully='false'\" subsys=daemon\nlevel=info msg=\"  --ipv4-node='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv4-pod-subnets=''\" subsys=daemon\nlevel=info msg=\"  --ipv4-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv4-service-loopback-address='169.254.42.1'\" subsys=daemon\nlevel=info msg=\"  --ipv4-service-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-cluster-alloc-cidr='f00d::/64'\" subsys=daemon\nlevel=info msg=\"  --ipv6-mcast-device=''\" subsys=daemon\nlevel=info msg=\"  --ipv6-node='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-pod-subnets=''\" subsys=daemon\nlevel=info msg=\"  --ipv6-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-service-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipvlan-master-device='undefined'\" subsys=daemon\nlevel=info msg=\"  --join-cluster='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-api-server=''\" subsys=daemon\nlevel=info msg=\"  --k8s-force-json-patch='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-heartbeat-timeout='30s'\" subsys=daemon\nlevel=info msg=\"  --k8s-kubeconfig-path=''\" subsys=daemon\nlevel=info msg=\"  --k8s-namespace='kube-system'\" subsys=daemon\nlevel=info msg=\"  --k8s-require-ipv4-pod-cidr='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-require-ipv6-pod-cidr='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-service-cache-size='128'\" subsys=daemon\nlevel=info msg=\"  --k8s-service-proxy-name=''\" subsys=daemon\nlevel=info msg=\"  --k8s-sync-timeout='3m0s'\" subsys=daemon\nlevel=info msg=\"  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'\" subsys=daemon\nlevel=info msg=\"  --k8s-watcher-queue-size='1024'\" subsys=daemon\nlevel=info msg=\"  
--keep-config='false'\" subsys=daemon\nlevel=info msg=\"  --kube-proxy-replacement='strict'\" subsys=daemon\nlevel=info msg=\"  --kube-proxy-replacement-healthz-bind-address=''\" subsys=daemon\nlevel=info msg=\"  --kvstore=''\" subsys=daemon\nlevel=info msg=\"  --kvstore-connectivity-timeout='2m0s'\" subsys=daemon\nlevel=info msg=\"  --kvstore-lease-ttl='15m0s'\" subsys=daemon\nlevel=info msg=\"  --kvstore-opt='map[]'\" subsys=daemon\nlevel=info msg=\"  --kvstore-periodic-sync='5m0s'\" subsys=daemon\nlevel=info msg=\"  --label-prefix-file=''\" subsys=daemon\nlevel=info msg=\"  --labels=''\" subsys=daemon\nlevel=info msg=\"  --lib-dir='/var/lib/cilium'\" subsys=daemon\nlevel=info msg=\"  --log-driver=''\" subsys=daemon\nlevel=info msg=\"  --log-opt='map[]'\" subsys=daemon\nlevel=info msg=\"  --log-system-load='false'\" subsys=daemon\nlevel=info msg=\"  --masquerade='true'\" subsys=daemon\nlevel=info msg=\"  --max-controller-interval='0'\" subsys=daemon\nlevel=info msg=\"  --metrics=''\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation='medium'\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation-flags='syn,fin,rst'\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation-interval='5s'\" subsys=daemon\nlevel=info msg=\"  --monitor-queue-size='0'\" subsys=daemon\nlevel=info msg=\"  --mtu='0'\" subsys=daemon\nlevel=info msg=\"  --nat46-range='0:0:0:0:0:FFFF::/96'\" subsys=daemon\nlevel=info msg=\"  --native-routing-cidr=''\" subsys=daemon\nlevel=info msg=\"  --node-port-acceleration='disabled'\" subsys=daemon\nlevel=info msg=\"  --node-port-algorithm='random'\" subsys=daemon\nlevel=info msg=\"  --node-port-bind-protection='true'\" subsys=daemon\nlevel=info msg=\"  --node-port-mode='snat'\" subsys=daemon\nlevel=info msg=\"  --node-port-range='30000,32767'\" subsys=daemon\nlevel=info msg=\"  --policy-audit-mode='false'\" subsys=daemon\nlevel=info msg=\"  --policy-queue-size='100'\" subsys=daemon\nlevel=info msg=\"  --policy-trigger-interval='1s'\" subsys=daemon\nlevel=info msg=\"  --pprof='false'\" subsys=daemon\nlevel=info msg=\"  --preallocate-bpf-maps='false'\" subsys=daemon\nlevel=info msg=\"  --prefilter-device='undefined'\" subsys=daemon\nlevel=info msg=\"  --prefilter-mode='native'\" subsys=daemon\nlevel=info msg=\"  --prepend-iptables-chains='true'\" subsys=daemon\nlevel=info msg=\"  --prometheus-serve-addr=''\" subsys=daemon\nlevel=info msg=\"  --proxy-connect-timeout='1'\" subsys=daemon\nlevel=info msg=\"  --proxy-prometheus-port='0'\" subsys=daemon\nlevel=info msg=\"  --read-cni-conf=''\" subsys=daemon\nlevel=info msg=\"  --restore='true'\" subsys=daemon\nlevel=info msg=\"  --sidecar-istio-proxy-image='cilium/istio_proxy'\" subsys=daemon\nlevel=info msg=\"  --single-cluster-route='false'\" subsys=daemon\nlevel=info msg=\"  --skip-crd-creation='false'\" subsys=daemon\nlevel=info msg=\"  --socket-path='/var/run/cilium/cilium.sock'\" subsys=daemon\nlevel=info msg=\"  --sockops-enable='false'\" subsys=daemon\nlevel=info msg=\"  --state-dir='/var/run/cilium'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-dns-reject-response-code='refused'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-enable-dns-compression='true'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-endpoint-max-ip-per-hostname='50'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-idle-connection-grace-period='0s'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-max-deferred-connection-deletes='10000'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-min-ttl='0'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-pre-cache=''\" 
subsys=daemon\nlevel=info msg=\"  --tofqdns-proxy-port='0'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-proxy-response-max-delay='100ms'\" subsys=daemon\nlevel=info msg=\"  --trace-payloadlen='128'\" subsys=daemon\nlevel=info msg=\"  --tunnel='vxlan'\" subsys=daemon\nlevel=info msg=\"  --version='false'\" subsys=daemon\nlevel=info msg=\"  --write-cni-conf-when-ready=''\" subsys=daemon\nlevel=info msg=\"     _ _ _\" subsys=daemon\nlevel=info msg=\" ___|_| |_|_ _ _____\" subsys=daemon\nlevel=info msg=\"|  _| | | | | |     |\" subsys=daemon\nlevel=info msg=\"|___|_|_|_|___|_|_|_|\" subsys=daemon\nlevel=info msg=\"Cilium 1.9.7 f993696 2021-05-12T18:21:30-07:00 go version go1.15.12 linux/amd64\" subsys=daemon\nlevel=info msg=\"cilium-envoy  version: 82a70d56bf324287ced3129300db609eceb21d10/1.17.3/Distribution/RELEASE/BoringSSL\" subsys=daemon\nlevel=info msg=\"clang (10.0.0) and kernel (4.19.0) versions: OK!\" subsys=linux-datapath\nlevel=info msg=\"linking environment: OK!\" subsys=linux-datapath\nlevel=info msg=\"Detected mounted BPF filesystem at /sys/fs/bpf\" subsys=bpf\nlevel=info msg=\"Parsing base label prefixes from default label list\" subsys=labels-filter\nlevel=info msg=\"Parsing additional label prefixes from user inputs: []\" subsys=labels-filter\nlevel=info msg=\"Final label prefixes to be used for identity evaluation:\" subsys=labels-filter\nlevel=info msg=\" - reserved:.*\" subsys=labels-filter\nlevel=info msg=\" - :io.kubernetes.pod.namespace\" subsys=labels-filter\nlevel=info msg=\" - :io.cilium.k8s.namespace.labels\" subsys=labels-filter\nlevel=info msg=\" - :app.kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:io.kubernetes\" subsys=labels-filter\nlevel=info msg=\" - !:kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:.*beta.kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:k8s.io\" subsys=labels-filter\nlevel=info msg=\" - !:pod-template-generation\" subsys=labels-filter\nlevel=info msg=\" - !:pod-template-hash\" subsys=labels-filter\nlevel=info msg=\" - !:controller-revision-hash\" subsys=labels-filter\nlevel=info msg=\" - !:annotation.*\" subsys=labels-filter\nlevel=info msg=\" - !:etcd_node\" subsys=labels-filter\nlevel=info msg=\"Using autogenerated IPv4 allocation range\" subsys=node v4Prefix=10.213.0.0/16\nlevel=info msg=\"Initializing daemon\" subsys=daemon\nlevel=info msg=\"Establishing connection to apiserver\" host=\"https://api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:443\" subsys=k8s\nlevel=info msg=\"Connected to apiserver\" subsys=k8s\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Trying to auto-enable \\\"enable-node-port\\\", \\\"enable-external-ips\\\", \\\"enable-host-reachable-services\\\", \\\"enable-host-port\\\", \\\"enable-session-affinity\\\" features\" subsys=daemon\nlevel=warning msg=\"Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.\" subsys=daemon\nlevel=info msg=\"BPF host routing is only available in native routing mode. 
Falling back to legacy host routing (enable-host-legacy-routing=true).\" subsys=daemon\nlevel=info msg=\"Inheriting MTU from external network interface\" device=ens5 ipAddr=172.20.54.213 mtu=9001 subsys=mtu\nlevel=info msg=\"Restored services from maps\" failed=0 restored=0 subsys=service\nlevel=info msg=\"Reading old endpoints...\" subsys=daemon\nlevel=info msg=\"No old endpoints found.\" subsys=daemon\nlevel=info msg=\"Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock\" subsys=envoy-manager\nlevel=error msg=\"Command execution failed\" cmd=\"[iptables -t mangle -n -L CILIUM_PRE_mangle]\" error=\"exit status 1\" subsys=iptables\nlevel=warning msg=\"# Warning: iptables-legacy tables present, use iptables-legacy to see them\" subsys=iptables\nlevel=warning msg=\"iptables: No chain/target/match by that name.\" subsys=iptables\nlevel=info msg=\"Waiting until all Cilium CRDs are available\" subsys=k8s\nlevel=info msg=\"All Cilium CRDs have been found and are available\" subsys=k8s\nlevel=info msg=\"Retrieved node information from kubernetes node\" nodeName=ip-172-20-54-213.ap-southeast-1.compute.internal subsys=k8s\nlevel=info msg=\"Received own node information from API server\" ipAddr.ipv4=172.20.54.213 ipAddr.ipv6=\"<nil>\" k8sNodeIP=172.20.54.213 labels=\"map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-54-213.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a]\" nodeName=ip-172-20-54-213.ap-southeast-1.compute.internal subsys=k8s v4Prefix=100.96.2.0/24 v6Prefix=\"<nil>\"\nlevel=info msg=\"k8s mode: Allowing localhost to reach local endpoints\" subsys=daemon\nlevel=info msg=\"Using auto-derived devices for BPF node port\" devices=\"[ens5]\" directRoutingDevice=ens5 subsys=daemon\nlevel=info msg=\"Enabling k8s event listener\" subsys=k8s-watcher\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Removing stale endpoint interfaces\" subsys=daemon\nlevel=info msg=\"Skipping kvstore configuration\" subsys=daemon\nlevel=info msg=\"Waiting until all pre-existing resources related to policy have been received\" subsys=k8s-watcher\nlevel=info msg=\"Initializing node addressing\" subsys=daemon\nlevel=info msg=\"Initializing kubernetes IPAM\" subsys=ipam v4Prefix=100.96.2.0/24 v6Prefix=\"<nil>\"\nlevel=info msg=\"Restoring endpoints...\" subsys=daemon\nlevel=info msg=\"Endpoints restored\" failed=0 restored=0 subsys=daemon\nlevel=info msg=\"Addressing information:\" subsys=daemon\nlevel=info msg=\"  Cluster-Name: default\" subsys=daemon\nlevel=info msg=\"  Cluster-ID: 0\" subsys=daemon\nlevel=info msg=\"  Local node-name: ip-172-20-54-213.ap-southeast-1.compute.internal\" subsys=daemon\nlevel=info msg=\"  Node-IPv6: <nil>\" subsys=daemon\nlevel=info msg=\"  External-Node IPv4: 172.20.54.213\" subsys=daemon\nlevel=info msg=\"  Internal-Node IPv4: 
100.96.2.222\" subsys=daemon\nlevel=info msg=\"  IPv4 allocation prefix: 100.96.2.0/24\" subsys=daemon\nlevel=info msg=\"  Loopback IPv4: 169.254.42.1\" subsys=daemon\nlevel=info msg=\"  Local IPv4 addresses:\" subsys=daemon\nlevel=info msg=\"  - 172.20.54.213\" subsys=daemon\nlevel=info msg=\"Adding local node to cluster\" node=\"{ip-172-20-54-213.ap-southeast-1.compute.internal default [{ExternalIP 54.255.203.155} {InternalIP 172.20.54.213} {CiliumInternalIP 100.96.2.222}] 100.96.2.0/24 <nil> 100.96.2.92 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-54-213.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] 6}\" subsys=nodediscovery\nlevel=info msg=\"Creating or updating CiliumNode resource\" node=ip-172-20-54-213.ap-southeast-1.compute.internal subsys=nodediscovery\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Successfully created CiliumNode resource\" subsys=nodediscovery\nlevel=info msg=\"Annotating k8s node\" subsys=daemon v4CiliumHostIP.IPv4=100.96.2.222 v4Prefix=100.96.2.0/24 v4healthIP.IPv4=100.96.2.92 v6CiliumHostIP.IPv6=\"<nil>\" v6Prefix=\"<nil>\" v6healthIP.IPv6=\"<nil>\"\nlevel=info msg=\"Initializing identity allocator\" subsys=identity-cache\nlevel=info msg=\"Cluster-ID is not specified, skipping ClusterMesh initialization\" subsys=daemon\nlevel=info msg=\"Setting up BPF datapath\" bpfClockSource=ktime bpfInsnSet=v2 subsys=datapath-loader\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0\nlevel=info msg=\"All pre-existing resources related to policy have been received; continuing\" subsys=k8s-watcher\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Adding new proxy port rules for cilium-dns-egress:41377\" proxy port name=cilium-dns-egress subsys=proxy\nlevel=info msg=\"Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock\" subsys=monitor-agent\nlevel=info msg=\"Validating configured node address ranges\" subsys=daemon\nlevel=info msg=\"Starting connection tracking garbage collector\" subsys=daemon\nlevel=info msg=\"Starting IP identity watcher\" subsys=ipcache\nlevel=info msg=\"Initial scan of connection tracking completed\" subsys=ct-gc\nlevel=info msg=\"Regenerating restored endpoints\" numRestored=0 subsys=daemon\nlevel=info msg=\"Datapath signal listener running\" subsys=signal\nlevel=info 
msg=\"Creating host endpoint\" subsys=daemon\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1086 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1086 identityLabels=\"k8s:kops.k8s.io/instancegroup=nodes-ap-southeast-1a,k8s:node-role.kubernetes.io/node,k8s:node.kubernetes.io/instance-type=t3.medium,k8s:topology.kubernetes.io/region=ap-southeast-1,k8s:topology.kubernetes.io/zone=ap-southeast-1a,reserved:host\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1086 identity=1 identityLabels=\"k8s:kops.k8s.io/instancegroup=nodes-ap-southeast-1a,k8s:node-role.kubernetes.io/node,k8s:node.kubernetes.io/instance-type=t3.medium,k8s:topology.kubernetes.io/region=ap-southeast-1,k8s:topology.kubernetes.io/zone=ap-southeast-1a,reserved:host\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Launching Cilium health daemon\" subsys=daemon\nlevel=info msg=\"Finished regenerating restored endpoints\" regenerated=0 subsys=daemon total=0\nlevel=info msg=\"Launching Cilium health endpoint\" subsys=daemon\nlevel=info msg=\"Started healthz status API server\" address=\"127.0.0.1:9876\" subsys=daemon\nlevel=info msg=\"Initializing Cilium API\" subsys=daemon\nlevel=info msg=\"Daemon initialization completed\" bootstrapTime=8.988486144s subsys=daemon\nlevel=info msg=\"Serving cilium API at unix:///var/run/cilium/cilium.sock\" subsys=daemon\nlevel=info msg=\"Hubble server is disabled\" subsys=hubble\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=11 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=11 identityLabels=\"reserved:health\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=11 identity=4 identityLabels=\"reserved:health\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Compiled new BPF template\" BPFCompilationTime=1.42253292s file-path=/var/run/cilium/state/templates/a388ff5a820abbd0a47a0108e8df1f5724eef4cc/bpf_host.o subsys=datapath-loader\nlevel=info msg=\"Compiled new BPF template\" BPFCompilationTime=1.701395422s file-path=/var/run/cilium/state/templates/4d1a562b399949c9517424e284476059a89d25b1/bpf_lxc.o subsys=datapath-loader\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1086 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=11 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Serving cilium health API at unix:///var/run/cilium/health.sock\" subsys=health-server\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=7267bc25-c04d-11eb-a8c4-06ad0a621d1c\nlevel=info msg=\"API request released by rate 
limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=7267bc25-c04d-11eb-a8c4-06ad0a621d1c waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.2.12 72658817-c04d-11eb-a8c4-06ad0a621d1c  }\" containerID=b117ba4923e132c248480d6bb819ae825a53ea5a01e349e0f5561e5891a6c1c3 datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc0c6986f7faf5 k8sPodName=svcaccounts-3369/test-pod-2a575c69-03fc-49da-b9ba-6af4f554b5b4 labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=764 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=764 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=svcaccounts,k8s:io.cilium.k8s.namespace.labels.e2e-run=f378111e-83d7-415f-9497-22f6dd7094ca,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=svcaccounts-3369,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=svcaccounts-3369\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.e2e-framework:svcaccounts k8s:io.cilium.k8s.namespace.labels.e2e-run:f378111e-83d7-415f-9497-22f6dd7094ca k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:svcaccounts-3369]\" subsys=crd-allocator\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Invalid state transition skipped\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=764 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=544 subsys=endpoint\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=svcaccounts;k8s:io.cilium.k8s.namespace.labels.e2e-run=f378111e-83d7-415f-9497-22f6dd7094ca;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=svcaccounts-3369;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=svcaccounts-3369;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=764 identity=25370 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=svcaccounts,k8s:io.cilium.k8s.namespace.labels.e2e-run=f378111e-83d7-415f-9497-22f6dd7094ca,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=svcaccounts-3369,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=svcaccounts-3369\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=764 identity=25370 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=72a53bdb-c04d-11eb-a8c4-06ad0a621d1c\nlevel=info msg=\"API request released by 
rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=72a53bdb-c04d-11eb-a8c4-06ad0a621d1c waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.2.90 729c8e16-c04d-11eb-a8c4-06ad0a621d1c  }\" containerID=bff1f0973f0733e862dd8b4f32ee1fa6bc02b01a01483d76ffadda3bfd6e2fc0 datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc545a1f6f9745 k8sPodName=container-runtime-6042/image-pull-testc62c79cb-7ed1-4711-9ae6-99bae888f316 labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=4065 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=4065 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=container-runtime,k8s:io.cilium.k8s.namespace.labels.e2e-run=4f35fa08-92ef-442a-ab2d-de70222d88d4,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=container-runtime-6042,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=container-runtime-6042\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.e2e-framework:container-runtime k8s:io.cilium.k8s.namespace.labels.e2e-run:4f35fa08-92ef-442a-ab2d-de70222d88d4 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:container-runtime-6042]\" subsys=crd-allocator\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=container-runtime;k8s:io.cilium.k8s.namespace.labels.e2e-run=4f35fa08-92ef-442a-ab2d-de70222d88d4;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=container-runtime-6042;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=container-runtime-6042;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=4065 identity=4400 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=container-runtime,k8s:io.cilium.k8s.namespace.labels.e2e-run=4f35fa08-92ef-442a-ab2d-de70222d88d4,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=container-runtime-6042,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=container-runtime-6042\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=4065 identity=4400 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=72e0bb4a-c04d-11eb-a8c4-06ad0a621d1c\nlevel=info msg=\"API request released by rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=72e0bb4a-c04d-11eb-a8c4-06ad0a621d1c waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.2.151 72e08ae0-c04d-11eb-a8c4-06ad0a621d1c  }\" 
containerID=34ded2cb9c35ef6efc1ffd7735ceacb4d6e7a202a652cd12765d4f3a6bbbc69f datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc885a5800f7c3 k8sPodName=services-3817/service-headless-fpb8q labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3502 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3502 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services,k8s:io.cilium.k8s.namespace.labels.e2e-run=d740bb95-f430-47d7-935a-f5f0b65a850d,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-3817,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=services-3817,k8s:name=service-headless\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Reserved new local key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services;k8s:io.cilium.k8s.namespace.labels.e2e-run=d740bb95-f430-47d7-935a-f5f0b65a850d;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-3817;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=services-3817;k8s:name=service-headless;\" subsys=allocator\nlevel=info msg=\"Reusing existing global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services;k8s:io.cilium.k8s.namespace.labels.e2e-run=d740bb95-f430-47d7-935a-f5f0b65a850d;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-3817;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=services-3817;k8s:name=service-headless;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3502 identity=36581 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services,k8s:io.cilium.k8s.namespace.labels.e2e-run=d740bb95-f430-47d7-935a-f5f0b65a850d,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-3817,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=services-3817,k8s:name=service-headless\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3502 identity=36581 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=764 identity=25370 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=764 identity=25370 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=1.058171997s subsys=rate totalDuration=1.058231655s uuid=7267bc25-c04d-11eb-a8c4-06ad0a621d1c waitDurationTotal=0s\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=4065 identity=4400 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info 
msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=4065 identity=4400 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=732.090483ms subsys=rate totalDuration=732.190497ms uuid=72a53bdb-c04d-11eb-a8c4-06ad0a621d1c waitDurationTotal=0s\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3502 identity=36581 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=3502 identity=36581 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=677.103759ms subsys=rate totalDuration=679.137545ms uuid=72e0bb4a-c04d-11eb-a8c4-06ad0a621d1c waitDurationTotal=0s\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=6 rateLimiterSkipped=true subsys=rate uuid=73494b72-c04d-11eb-a8c4-06ad0a621d1c\nlevel=info msg=\"API request released by rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=6 rateLimiterSkipped=true subsys=rate uuid=73494b72-c04d-11eb-a8c4-06ad0a621d1c waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.2.23 73455a5b-c04d-11eb-a8c4-06ad0a621d1c  }\" containerID=4335a6e60e4a244e1b03008b87247ebb017d47a78a236a1fecdd1052ae11e628 datapathConfiguration=\"&{false false false false <nil>}\" interface=lxca10a1a759c74 k8sPodName=services-9561/hairpin labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2186 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2186 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services,k8s:io.cilium.k8s.namespace.labels.e2e-run=e674a1e1-3883-43f4-a979-70dba09af979,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-9561,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=services-9561,k8s:testid=hairpin-test-0330ee0c-00ad-4f91-af65-0870f905ce03\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. 
All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.e2e-framework:services k8s:io.cilium.k8s.namespace.labels.e2e-run:e674a1e1-3883-43f4-a979-70dba09af979 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:services-9561]\" subsys=crd-allocator\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services;k8s:io.cilium.k8s.namespace.labels.e2e-run=e674a1e1-3883-43f4-a979-70dba09af979;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-9561;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=services-9561;k8s:testid=hairpin-test-0330ee0c-00ad-4f91-af65-0870f905ce03;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2186 identity=2707 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services,k8s:io.cilium.k8s.namespace.labels.e2e-run=e674a1e1-3883-43f4-a979-70dba09af979,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-9561,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=services-9561,k8s:testid=hairpin-test-0330ee0c-00ad-4f91-af65-0870f905ce03\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2186 identity=2707 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2186 identity=2707 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=2186 identity=2707 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=308.127953ms subsys=rate totalDuration=308.212879ms uuid=73494b72-c04d-11eb-a8c4-06ad0a621d1c waitDurationTotal=0s\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=7 subsys=rate uuid=740ebd6b-c04d-11eb-a8c4-06ad0a621d1c\nlevel=info msg=\"API request released by rate limiter\" burst=8 limit=1.44/s maxWaitDuration=15s maxWaitDurationLimiter=14.999917958s name=endpoint-create parallelRequests=7 subsys=rate uuid=740ebd6b-c04d-11eb-a8c4-06ad0a621d1c waitDurationLimiter=0s waitDurationTotal=\"103.32µs\"\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.2.78 740982fb-c04d-11eb-a8c4-06ad0a621d1c  }\" containerID=5e1735ef47840331a6eac048bbd3635b65e3113a0b2e4b7c535481043bc8ff2e datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc7d338ddf78f9 k8sPodName=containers-5341/client-containers-51af7210-dcc1-4fea-98a8-94e6741da4ff labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=28 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=28 
identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=containers,k8s:io.cilium.k8s.namespace.labels.e2e-run=db5580ea-fc4a-4a0e-bc41-38af989938b4,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=containers-5341,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=containers-5341\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.e2e-framework:containers k8s:io.cilium.k8s.namespace.labels.e2e-run:db5580ea-fc4a-4a0e-bc41-38af989938b4 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:containers-5341]\" subsys=crd-allocator\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=containers;k8s:io.cilium.k8s.namespace.labels.e2e-run=db5580ea-fc4a-4a0e-bc41-38af989938b4;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=containers-5341;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=containers-5341;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=28 identity=7141 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=containers,k8s:io.cilium.k8s.namespace.labels.e2e-run=db5580ea-fc4a-4a0e-bc41-38af989938b4,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=containers-5341,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=containers-5341\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=28 identity=7141 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=28 identity=7141 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=28 identity=7141 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=236.094573ms subsys=rate totalDuration=236.225851ms uuid=740ebd6b-c04d-11eb-a8c4-06ad0a621d1c waitDurationTotal=\"103.32µs\"\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"regenerating all endpoints\" reason= subsys=endpoint-manager\n==== END logs for container cilium-agent of pod kube-system/cilium-wth8r ====\n==== START logs for container clean-cilium-state of pod kube-system/cilium-wzxqf ====\n==== END logs for container clean-cilium-state of pod kube-system/cilium-wzxqf ====\n==== START logs for container cilium-agent of pod kube-system/cilium-wzxqf ====\nlevel=info msg=\"Skipped reading configuration file\" reason=\"Config File \\\"ciliumd\\\" Not Found in \\\"[/root]\\\"\" subsys=config\nlevel=info msg=\"Started gops server\" address=\"127.0.0.1:9890\" subsys=daemon\nlevel=info msg=\"  --agent-health-port='9876'\" subsys=daemon\nlevel=info msg=\"  --agent-labels=''\" subsys=daemon\nlevel=info msg=\"  --allow-icmp-frag-needed='true'\" subsys=daemon\nlevel=info msg=\"  
--allow-localhost='auto'\" subsys=daemon\nlevel=info msg=\"  --annotate-k8s-node='true'\" subsys=daemon\nlevel=info msg=\"  --api-rate-limit='map[]'\" subsys=daemon\nlevel=info msg=\"  --arping-refresh-period='5m0s'\" subsys=daemon\nlevel=info msg=\"  --auto-create-cilium-node-resource='true'\" subsys=daemon\nlevel=info msg=\"  --auto-direct-node-routes='false'\" subsys=daemon\nlevel=info msg=\"  --blacklist-conflicting-routes='false'\" subsys=daemon\nlevel=info msg=\"  --bpf-compile-debug='false'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-global-any-max='262144'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-global-tcp-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-any='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp='6h0m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp-fin='10s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-regular-tcp-syn='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-service-any='1m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-ct-timeout-service-tcp='6h0m0s'\" subsys=daemon\nlevel=info msg=\"  --bpf-fragments-map-max='8192'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-acceleration='disabled'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-algorithm='random'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-maglev-table-size='16381'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-map-max='65536'\" subsys=daemon\nlevel=info msg=\"  --bpf-lb-mode='snat'\" subsys=daemon\nlevel=info msg=\"  --bpf-map-dynamic-size-ratio='0'\" subsys=daemon\nlevel=info msg=\"  --bpf-nat-global-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-neigh-global-max='524288'\" subsys=daemon\nlevel=info msg=\"  --bpf-policy-map-max='16384'\" subsys=daemon\nlevel=info msg=\"  --bpf-root=''\" subsys=daemon\nlevel=info msg=\"  --bpf-sock-rev-map-max='262144'\" subsys=daemon\nlevel=info msg=\"  --certificates-directory='/var/run/cilium/certs'\" subsys=daemon\nlevel=info msg=\"  --cgroup-root=''\" subsys=daemon\nlevel=info msg=\"  --cluster-id='0'\" subsys=daemon\nlevel=info msg=\"  --cluster-name='default'\" subsys=daemon\nlevel=info msg=\"  --clustermesh-config='/var/lib/cilium/clustermesh/'\" subsys=daemon\nlevel=info msg=\"  --cmdref=''\" subsys=daemon\nlevel=info msg=\"  --config=''\" subsys=daemon\nlevel=info msg=\"  --config-dir='/tmp/cilium/config-map'\" subsys=daemon\nlevel=info msg=\"  --conntrack-gc-interval='0s'\" subsys=daemon\nlevel=info msg=\"  --crd-wait-timeout='5m0s'\" subsys=daemon\nlevel=info msg=\"  --datapath-mode='veth'\" subsys=daemon\nlevel=info msg=\"  --debug='false'\" subsys=daemon\nlevel=info msg=\"  --debug-verbose=''\" subsys=daemon\nlevel=info msg=\"  --device=''\" subsys=daemon\nlevel=info msg=\"  --devices=''\" subsys=daemon\nlevel=info msg=\"  --direct-routing-device=''\" subsys=daemon\nlevel=info msg=\"  --disable-cnp-status-updates='false'\" subsys=daemon\nlevel=info msg=\"  --disable-conntrack='false'\" subsys=daemon\nlevel=info msg=\"  --disable-endpoint-crd='false'\" subsys=daemon\nlevel=info msg=\"  --disable-envoy-version-check='false'\" subsys=daemon\nlevel=info msg=\"  --disable-iptables-feeder-rules=''\" subsys=daemon\nlevel=info msg=\"  --dns-max-ips-per-restored-rule='1000'\" subsys=daemon\nlevel=info msg=\"  --egress-masquerade-interfaces=''\" subsys=daemon\nlevel=info msg=\"  --egress-multi-home-ip-rule-compat='false'\" subsys=daemon\nlevel=info msg=\"  
--enable-auto-protect-node-port-range='true'\" subsys=daemon\nlevel=info msg=\"  --enable-bandwidth-manager='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-clock-probe='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-masquerade='false'\" subsys=daemon\nlevel=info msg=\"  --enable-bpf-tproxy='false'\" subsys=daemon\nlevel=info msg=\"  --enable-endpoint-health-checking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-endpoint-routes='false'\" subsys=daemon\nlevel=info msg=\"  --enable-external-ips='true'\" subsys=daemon\nlevel=info msg=\"  --enable-health-check-nodeport='true'\" subsys=daemon\nlevel=info msg=\"  --enable-health-checking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-host-firewall='false'\" subsys=daemon\nlevel=info msg=\"  --enable-host-legacy-routing='false'\" subsys=daemon\nlevel=info msg=\"  --enable-host-port='true'\" subsys=daemon\nlevel=info msg=\"  --enable-host-reachable-services='false'\" subsys=daemon\nlevel=info msg=\"  --enable-hubble='false'\" subsys=daemon\nlevel=info msg=\"  --enable-identity-mark='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ip-masq-agent='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipsec='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv4='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv4-fragment-tracking='true'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv6='false'\" subsys=daemon\nlevel=info msg=\"  --enable-ipv6-ndp='false'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-api-discovery='false'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-endpoint-slice='true'\" subsys=daemon\nlevel=info msg=\"  --enable-k8s-event-handover='false'\" subsys=daemon\nlevel=info msg=\"  --enable-l7-proxy='true'\" subsys=daemon\nlevel=info msg=\"  --enable-local-node-route='true'\" subsys=daemon\nlevel=info msg=\"  --enable-local-redirect-policy='false'\" subsys=daemon\nlevel=info msg=\"  --enable-monitor='true'\" subsys=daemon\nlevel=info msg=\"  --enable-node-port='true'\" subsys=daemon\nlevel=info msg=\"  --enable-policy='default'\" subsys=daemon\nlevel=info msg=\"  --enable-remote-node-identity='true'\" subsys=daemon\nlevel=info msg=\"  --enable-selective-regeneration='true'\" subsys=daemon\nlevel=info msg=\"  --enable-session-affinity='false'\" subsys=daemon\nlevel=info msg=\"  --enable-svc-source-range-check='true'\" subsys=daemon\nlevel=info msg=\"  --enable-tracing='false'\" subsys=daemon\nlevel=info msg=\"  --enable-well-known-identities='true'\" subsys=daemon\nlevel=info msg=\"  --enable-xt-socket-fallback='true'\" subsys=daemon\nlevel=info msg=\"  --encrypt-interface=''\" subsys=daemon\nlevel=info msg=\"  --encrypt-node='false'\" subsys=daemon\nlevel=info msg=\"  --endpoint-interface-name-prefix='lxc+'\" subsys=daemon\nlevel=info msg=\"  --endpoint-queue-size='25'\" subsys=daemon\nlevel=info msg=\"  --endpoint-status=''\" subsys=daemon\nlevel=info msg=\"  --envoy-log=''\" subsys=daemon\nlevel=info msg=\"  --exclude-local-address=''\" subsys=daemon\nlevel=info msg=\"  --fixed-identity-mapping='map[]'\" subsys=daemon\nlevel=info msg=\"  --flannel-master-device=''\" subsys=daemon\nlevel=info msg=\"  --flannel-uninstall-on-exit='false'\" subsys=daemon\nlevel=info msg=\"  --force-local-policy-eval-at-source='true'\" subsys=daemon\nlevel=info msg=\"  --gops-port='9890'\" subsys=daemon\nlevel=info msg=\"  --host-reachable-services-protos='tcp,udp'\" subsys=daemon\nlevel=info msg=\"  --http-403-msg=''\" subsys=daemon\nlevel=info msg=\"  --http-idle-timeout='0'\" subsys=daemon\nlevel=info msg=\"  
--http-max-grpc-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --http-normalize-path='true'\" subsys=daemon\nlevel=info msg=\"  --http-request-timeout='3600'\" subsys=daemon\nlevel=info msg=\"  --http-retry-count='3'\" subsys=daemon\nlevel=info msg=\"  --http-retry-timeout='0'\" subsys=daemon\nlevel=info msg=\"  --hubble-disable-tls='false'\" subsys=daemon\nlevel=info msg=\"  --hubble-event-queue-size='0'\" subsys=daemon\nlevel=info msg=\"  --hubble-flow-buffer-size='4095'\" subsys=daemon\nlevel=info msg=\"  --hubble-listen-address=''\" subsys=daemon\nlevel=info msg=\"  --hubble-metrics=''\" subsys=daemon\nlevel=info msg=\"  --hubble-metrics-server=''\" subsys=daemon\nlevel=info msg=\"  --hubble-socket-path='/var/run/cilium/hubble.sock'\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-cert-file=''\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-client-ca-files=''\" subsys=daemon\nlevel=info msg=\"  --hubble-tls-key-file=''\" subsys=daemon\nlevel=info msg=\"  --identity-allocation-mode='crd'\" subsys=daemon\nlevel=info msg=\"  --identity-change-grace-period='5s'\" subsys=daemon\nlevel=info msg=\"  --install-iptables-rules='true'\" subsys=daemon\nlevel=info msg=\"  --ip-allocation-timeout='2m0s'\" subsys=daemon\nlevel=info msg=\"  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'\" subsys=daemon\nlevel=info msg=\"  --ipam='kubernetes'\" subsys=daemon\nlevel=info msg=\"  --ipsec-key-file=''\" subsys=daemon\nlevel=info msg=\"  --iptables-lock-timeout='5s'\" subsys=daemon\nlevel=info msg=\"  --iptables-random-fully='false'\" subsys=daemon\nlevel=info msg=\"  --ipv4-node='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv4-pod-subnets=''\" subsys=daemon\nlevel=info msg=\"  --ipv4-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv4-service-loopback-address='169.254.42.1'\" subsys=daemon\nlevel=info msg=\"  --ipv4-service-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-cluster-alloc-cidr='f00d::/64'\" subsys=daemon\nlevel=info msg=\"  --ipv6-mcast-device=''\" subsys=daemon\nlevel=info msg=\"  --ipv6-node='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-pod-subnets=''\" subsys=daemon\nlevel=info msg=\"  --ipv6-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipv6-service-range='auto'\" subsys=daemon\nlevel=info msg=\"  --ipvlan-master-device='undefined'\" subsys=daemon\nlevel=info msg=\"  --join-cluster='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-api-server=''\" subsys=daemon\nlevel=info msg=\"  --k8s-force-json-patch='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-heartbeat-timeout='30s'\" subsys=daemon\nlevel=info msg=\"  --k8s-kubeconfig-path=''\" subsys=daemon\nlevel=info msg=\"  --k8s-namespace='kube-system'\" subsys=daemon\nlevel=info msg=\"  --k8s-require-ipv4-pod-cidr='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-require-ipv6-pod-cidr='false'\" subsys=daemon\nlevel=info msg=\"  --k8s-service-cache-size='128'\" subsys=daemon\nlevel=info msg=\"  --k8s-service-proxy-name=''\" subsys=daemon\nlevel=info msg=\"  --k8s-sync-timeout='3m0s'\" subsys=daemon\nlevel=info msg=\"  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'\" subsys=daemon\nlevel=info msg=\"  --k8s-watcher-queue-size='1024'\" subsys=daemon\nlevel=info msg=\"  --keep-config='false'\" subsys=daemon\nlevel=info msg=\"  --kube-proxy-replacement='strict'\" subsys=daemon\nlevel=info msg=\"  --kube-proxy-replacement-healthz-bind-address=''\" subsys=daemon\nlevel=info msg=\"  --kvstore=''\" 
subsys=daemon\nlevel=info msg=\"  --kvstore-connectivity-timeout='2m0s'\" subsys=daemon\nlevel=info msg=\"  --kvstore-lease-ttl='15m0s'\" subsys=daemon\nlevel=info msg=\"  --kvstore-opt='map[]'\" subsys=daemon\nlevel=info msg=\"  --kvstore-periodic-sync='5m0s'\" subsys=daemon\nlevel=info msg=\"  --label-prefix-file=''\" subsys=daemon\nlevel=info msg=\"  --labels=''\" subsys=daemon\nlevel=info msg=\"  --lib-dir='/var/lib/cilium'\" subsys=daemon\nlevel=info msg=\"  --log-driver=''\" subsys=daemon\nlevel=info msg=\"  --log-opt='map[]'\" subsys=daemon\nlevel=info msg=\"  --log-system-load='false'\" subsys=daemon\nlevel=info msg=\"  --masquerade='true'\" subsys=daemon\nlevel=info msg=\"  --max-controller-interval='0'\" subsys=daemon\nlevel=info msg=\"  --metrics=''\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation='medium'\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation-flags='syn,fin,rst'\" subsys=daemon\nlevel=info msg=\"  --monitor-aggregation-interval='5s'\" subsys=daemon\nlevel=info msg=\"  --monitor-queue-size='0'\" subsys=daemon\nlevel=info msg=\"  --mtu='0'\" subsys=daemon\nlevel=info msg=\"  --nat46-range='0:0:0:0:0:FFFF::/96'\" subsys=daemon\nlevel=info msg=\"  --native-routing-cidr=''\" subsys=daemon\nlevel=info msg=\"  --node-port-acceleration='disabled'\" subsys=daemon\nlevel=info msg=\"  --node-port-algorithm='random'\" subsys=daemon\nlevel=info msg=\"  --node-port-bind-protection='true'\" subsys=daemon\nlevel=info msg=\"  --node-port-mode='snat'\" subsys=daemon\nlevel=info msg=\"  --node-port-range='30000,32767'\" subsys=daemon\nlevel=info msg=\"  --policy-audit-mode='false'\" subsys=daemon\nlevel=info msg=\"  --policy-queue-size='100'\" subsys=daemon\nlevel=info msg=\"  --policy-trigger-interval='1s'\" subsys=daemon\nlevel=info msg=\"  --pprof='false'\" subsys=daemon\nlevel=info msg=\"  --preallocate-bpf-maps='false'\" subsys=daemon\nlevel=info msg=\"  --prefilter-device='undefined'\" subsys=daemon\nlevel=info msg=\"  --prefilter-mode='native'\" subsys=daemon\nlevel=info msg=\"  --prepend-iptables-chains='true'\" subsys=daemon\nlevel=info msg=\"  --prometheus-serve-addr=''\" subsys=daemon\nlevel=info msg=\"  --proxy-connect-timeout='1'\" subsys=daemon\nlevel=info msg=\"  --proxy-prometheus-port='0'\" subsys=daemon\nlevel=info msg=\"  --read-cni-conf=''\" subsys=daemon\nlevel=info msg=\"  --restore='true'\" subsys=daemon\nlevel=info msg=\"  --sidecar-istio-proxy-image='cilium/istio_proxy'\" subsys=daemon\nlevel=info msg=\"  --single-cluster-route='false'\" subsys=daemon\nlevel=info msg=\"  --skip-crd-creation='false'\" subsys=daemon\nlevel=info msg=\"  --socket-path='/var/run/cilium/cilium.sock'\" subsys=daemon\nlevel=info msg=\"  --sockops-enable='false'\" subsys=daemon\nlevel=info msg=\"  --state-dir='/var/run/cilium'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-dns-reject-response-code='refused'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-enable-dns-compression='true'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-endpoint-max-ip-per-hostname='50'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-idle-connection-grace-period='0s'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-max-deferred-connection-deletes='10000'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-min-ttl='0'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-pre-cache=''\" subsys=daemon\nlevel=info msg=\"  --tofqdns-proxy-port='0'\" subsys=daemon\nlevel=info msg=\"  --tofqdns-proxy-response-max-delay='100ms'\" subsys=daemon\nlevel=info msg=\"  --trace-payloadlen='128'\" subsys=daemon\nlevel=info msg=\"  
--tunnel='vxlan'\" subsys=daemon\nlevel=info msg=\"  --version='false'\" subsys=daemon\nlevel=info msg=\"  --write-cni-conf-when-ready=''\" subsys=daemon\nlevel=info msg=\"     _ _ _\" subsys=daemon\nlevel=info msg=\" ___|_| |_|_ _ _____\" subsys=daemon\nlevel=info msg=\"|  _| | | | | |     |\" subsys=daemon\nlevel=info msg=\"|___|_|_|_|___|_|_|_|\" subsys=daemon\nlevel=info msg=\"Cilium 1.9.7 f993696 2021-05-12T18:21:30-07:00 go version go1.15.12 linux/amd64\" subsys=daemon\nlevel=info msg=\"cilium-envoy  version: 82a70d56bf324287ced3129300db609eceb21d10/1.17.3/Distribution/RELEASE/BoringSSL\" subsys=daemon\nlevel=info msg=\"clang (10.0.0) and kernel (4.19.0) versions: OK!\" subsys=linux-datapath\nlevel=info msg=\"linking environment: OK!\" subsys=linux-datapath\nlevel=info msg=\"Detected mounted BPF filesystem at /sys/fs/bpf\" subsys=bpf\nlevel=info msg=\"Parsing base label prefixes from default label list\" subsys=labels-filter\nlevel=info msg=\"Parsing additional label prefixes from user inputs: []\" subsys=labels-filter\nlevel=info msg=\"Final label prefixes to be used for identity evaluation:\" subsys=labels-filter\nlevel=info msg=\" - reserved:.*\" subsys=labels-filter\nlevel=info msg=\" - :io.kubernetes.pod.namespace\" subsys=labels-filter\nlevel=info msg=\" - :io.cilium.k8s.namespace.labels\" subsys=labels-filter\nlevel=info msg=\" - :app.kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:io.kubernetes\" subsys=labels-filter\nlevel=info msg=\" - !:kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:.*beta.kubernetes.io\" subsys=labels-filter\nlevel=info msg=\" - !:k8s.io\" subsys=labels-filter\nlevel=info msg=\" - !:pod-template-generation\" subsys=labels-filter\nlevel=info msg=\" - !:pod-template-hash\" subsys=labels-filter\nlevel=info msg=\" - !:controller-revision-hash\" subsys=labels-filter\nlevel=info msg=\" - !:annotation.*\" subsys=labels-filter\nlevel=info msg=\" - !:etcd_node\" subsys=labels-filter\nlevel=info msg=\"Using autogenerated IPv4 allocation range\" subsys=node v4Prefix=10.32.0.0/16\nlevel=info msg=\"Initializing daemon\" subsys=daemon\nlevel=info msg=\"Establishing connection to apiserver\" host=\"https://api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:443\" subsys=k8s\nlevel=info msg=\"Connected to apiserver\" subsys=k8s\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Trying to auto-enable \\\"enable-node-port\\\", \\\"enable-external-ips\\\", \\\"enable-host-reachable-services\\\", \\\"enable-host-port\\\", \\\"enable-session-affinity\\\" features\" subsys=daemon\nlevel=warning msg=\"Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.\" subsys=daemon\nlevel=info msg=\"BPF host routing is only available in native routing mode. 
Falling back to legacy host routing (enable-host-legacy-routing=true).\" subsys=daemon\nlevel=info msg=\"Inheriting MTU from external network interface\" device=ens5 ipAddr=172.20.61.32 mtu=9001 subsys=mtu\nlevel=info msg=\"Restored services from maps\" failed=0 restored=0 subsys=service\nlevel=info msg=\"Reading old endpoints...\" subsys=daemon\nlevel=info msg=\"No old endpoints found.\" subsys=daemon\nlevel=info msg=\"Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock\" subsys=envoy-manager\nlevel=error msg=\"Command execution failed\" cmd=\"[iptables -t mangle -n -L CILIUM_PRE_mangle]\" error=\"exit status 1\" subsys=iptables\nlevel=warning msg=\"# Warning: iptables-legacy tables present, use iptables-legacy to see them\" subsys=iptables\nlevel=warning msg=\"iptables: No chain/target/match by that name.\" subsys=iptables\nlevel=info msg=\"Waiting until all Cilium CRDs are available\" subsys=k8s\nlevel=info msg=\"All Cilium CRDs have been found and are available\" subsys=k8s\nlevel=info msg=\"Retrieved node information from kubernetes node\" nodeName=ip-172-20-61-32.ap-southeast-1.compute.internal subsys=k8s\nlevel=info msg=\"Received own node information from API server\" ipAddr.ipv4=172.20.61.32 ipAddr.ipv6=\"<nil>\" k8sNodeIP=172.20.61.32 labels=\"map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-61-32.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a]\" nodeName=ip-172-20-61-32.ap-southeast-1.compute.internal subsys=k8s v4Prefix=100.96.4.0/24 v6Prefix=\"<nil>\"\nlevel=info msg=\"k8s mode: Allowing localhost to reach local endpoints\" subsys=daemon\nlevel=info msg=\"Using auto-derived devices for BPF node port\" devices=\"[ens5]\" directRoutingDevice=ens5 subsys=daemon\nlevel=info msg=\"Enabling k8s event listener\" subsys=k8s-watcher\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Removing stale endpoint interfaces\" subsys=daemon\nlevel=info msg=\"Skipping kvstore configuration\" subsys=daemon\nlevel=info msg=\"Initializing node addressing\" subsys=daemon\nlevel=info msg=\"Initializing kubernetes IPAM\" subsys=ipam v4Prefix=100.96.4.0/24 v6Prefix=\"<nil>\"\nlevel=info msg=\"Waiting until all pre-existing resources related to policy have been received\" subsys=k8s-watcher\nlevel=info msg=\"Restoring endpoints...\" subsys=daemon\nlevel=info msg=\"Endpoints restored\" failed=0 restored=0 subsys=daemon\nlevel=info msg=\"Addressing information:\" subsys=daemon\nlevel=info msg=\"  Cluster-Name: default\" subsys=daemon\nlevel=info msg=\"  Cluster-ID: 0\" subsys=daemon\nlevel=info msg=\"  Local node-name: ip-172-20-61-32.ap-southeast-1.compute.internal\" subsys=daemon\nlevel=info msg=\"  Node-IPv6: <nil>\" subsys=daemon\nlevel=info msg=\"  External-Node IPv4: 172.20.61.32\" subsys=daemon\nlevel=info msg=\"  Internal-Node IPv4: 
100.96.4.5\" subsys=daemon\nlevel=info msg=\"  IPv4 allocation prefix: 100.96.4.0/24\" subsys=daemon\nlevel=info msg=\"  Loopback IPv4: 169.254.42.1\" subsys=daemon\nlevel=info msg=\"  Local IPv4 addresses:\" subsys=daemon\nlevel=info msg=\"  - 172.20.61.32\" subsys=daemon\nlevel=info msg=\"Creating or updating CiliumNode resource\" node=ip-172-20-61-32.ap-southeast-1.compute.internal subsys=nodediscovery\nlevel=info msg=\"Adding local node to cluster\" node=\"{ip-172-20-61-32.ap-southeast-1.compute.internal default [{ExternalIP 13.250.10.232} {InternalIP 172.20.61.32} {CiliumInternalIP 100.96.4.5}] 100.96.4.0/24 <nil> 100.96.4.87 <nil> 0 local 0 map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-61-32.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] 6}\" subsys=nodediscovery\nlevel=warning msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\" subsys=klog\nlevel=info msg=\"Successfully created CiliumNode resource\" subsys=nodediscovery\nlevel=info msg=\"Annotating k8s node\" subsys=daemon v4CiliumHostIP.IPv4=100.96.4.5 v4Prefix=100.96.4.0/24 v4healthIP.IPv4=100.96.4.87 v6CiliumHostIP.IPv6=\"<nil>\" v6Prefix=\"<nil>\" v6healthIP.IPv6=\"<nil>\"\nlevel=info msg=\"Initializing identity allocator\" subsys=identity-cache\nlevel=info msg=\"Cluster-ID is not specified, skipping ClusterMesh initialization\" subsys=daemon\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Setting up BPF datapath\" bpfClockSource=ktime bpfInsnSet=v2 subsys=datapath-loader\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=net.core.bpf_jit_enable sysParamValue=1\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1\nlevel=info msg=\"Setting sysctl\" subsys=datapath-loader sysParamName=kernel.timer_migration sysParamValue=0\nlevel=info msg=\"All pre-existing resources related to policy have been received; continuing\" subsys=k8s-watcher\nlevel=info msg=\"Adding new proxy port rules for cilium-dns-egress:45609\" proxy port name=cilium-dns-egress subsys=proxy\nlevel=info msg=\"Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock\" subsys=monitor-agent\nlevel=info msg=\"Validating configured node address ranges\" subsys=daemon\nlevel=info msg=\"Starting connection tracking garbage collector\" subsys=daemon\nlevel=info msg=\"Starting IP identity watcher\" subsys=ipcache\nlevel=info msg=\"Initial scan of connection tracking completed\" subsys=ct-gc\nlevel=info msg=\"Regenerating restored endpoints\" numRestored=0 subsys=daemon\nlevel=info msg=\"Datapath signal listener running\" subsys=signal\nlevel=info msg=\"Creating host 
endpoint\" subsys=daemon\nlevel=info msg=\"Finished regenerating restored endpoints\" regenerated=0 subsys=daemon total=0\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=274 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=274 identityLabels=\"k8s:kops.k8s.io/instancegroup=nodes-ap-southeast-1a,k8s:node-role.kubernetes.io/node,k8s:node.kubernetes.io/instance-type=t3.medium,k8s:topology.kubernetes.io/region=ap-southeast-1,k8s:topology.kubernetes.io/zone=ap-southeast-1a,reserved:host\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=274 identity=1 identityLabels=\"k8s:kops.k8s.io/instancegroup=nodes-ap-southeast-1a,k8s:node-role.kubernetes.io/node,k8s:node.kubernetes.io/instance-type=t3.medium,k8s:topology.kubernetes.io/region=ap-southeast-1,k8s:topology.kubernetes.io/zone=ap-southeast-1a,reserved:host\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Launching Cilium health daemon\" subsys=daemon\nlevel=info msg=\"Launching Cilium health endpoint\" subsys=daemon\nlevel=info msg=\"Started healthz status API server\" address=\"127.0.0.1:9876\" subsys=daemon\nlevel=info msg=\"Initializing Cilium API\" subsys=daemon\nlevel=info msg=\"Daemon initialization completed\" bootstrapTime=8.656100502s subsys=daemon\nlevel=info msg=\"Serving cilium API at unix:///var/run/cilium/cilium.sock\" subsys=daemon\nlevel=info msg=\"Hubble server is disabled\" subsys=hubble\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=ff15c523-c04c-11eb-93a7-0626662c84ae\nlevel=info msg=\"API request released by rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=4 rateLimiterSkipped=true subsys=rate uuid=ff15c523-c04c-11eb-93a7-0626662c84ae waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.4.8 ff14d6a1-c04c-11eb-93a7-0626662c84ae  }\" containerID=24189bbeecd84441c784a6889c08cd5288973511db9a874b366a06a5fc88eda9 datapathConfiguration=\"&{false false false false <nil>}\" interface=lxcccf9fe52fe76 k8sPodName=kube-system/coredns-autoscaler-6f594f4c58-kb772 labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=86 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=86 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/name=core.addons.k8s.io,k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/version=1.4.0,k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/managed-by=kops,k8s:io.cilium.k8s.namespace.labels.k8s-addon=core.addons.k8s.io,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=coredns-autoscaler,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=coredns-autoscaler\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. 
All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/name:core.addons.k8s.io k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/version:1.4.0 k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/managed-by:kops k8s:io.cilium.k8s.namespace.labels.k8s-addon:core.addons.k8s.io k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:kube-system]\" subsys=crd-allocator\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Invalid state transition skipped\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=86 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4= ipv6= k8sPodName=/ line=544 subsys=endpoint\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/name=core.addons.k8s.io;k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/version=1.4.0;k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/managed-by=kops;k8s:io.cilium.k8s.namespace.labels.k8s-addon=core.addons.k8s.io;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=coredns-autoscaler;k8s:io.kubernetes.pod.namespace=kube-system;k8s:k8s-app=coredns-autoscaler;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=86 identity=15680 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/name=core.addons.k8s.io,k8s:io.cilium.k8s.namespace.labels.addon.kops.k8s.io/version=1.4.0,k8s:io.cilium.k8s.namespace.labels.app.kubernetes.io/managed-by=kops,k8s:io.cilium.k8s.namespace.labels.k8s-addon=core.addons.k8s.io,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=coredns-autoscaler,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=coredns-autoscaler\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=86 identity=15680 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=876 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=876 identityLabels=\"reserved:health\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=876 identity=4 identityLabels=\"reserved:health\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Compiled new BPF template\" BPFCompilationTime=1.811893039s file-path=/var/run/cilium/state/templates/028ee032e491bf64d8d32b95fe5d87daddd5590a/bpf_host.o subsys=datapath-loader\nlevel=info msg=\"Compiled new BPF template\" BPFCompilationTime=1.917822904s file-path=/var/run/cilium/state/templates/a9f8b320d439d6521ddb7b3d6efffa2b4787c42b/bpf_lxc.o 
subsys=datapath-loader\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=86 identity=15680 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=86 identity=15680 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=2.38104216s subsys=rate totalDuration=2.381112398s uuid=ff15c523-c04c-11eb-93a7-0626662c84ae waitDurationTotal=0s\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=876 identity=4 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=274 identity=1 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Serving cilium health API at unix:///var/run/cilium/health.sock\" subsys=health-server\nlevel=warning msg=\"Unable to update ipcache map entry on pod add\" error=\"ipcache entry for podIP 100.96.4.8 owned by kvstore or agent\" hostIP=100.96.4.8 k8sNamespace=kube-system k8sPodName=coredns-autoscaler-6f594f4c58-kb772 podIP=100.96.4.8 podIPs=\"[{100.96.4.8}]\" subsys=k8s-watcher\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=3 rateLimiterSkipped=true subsys=rate uuid=72840b18-c04d-11eb-93a7-0626662c84ae\nlevel=info msg=\"API request released by rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=3 rateLimiterSkipped=true subsys=rate uuid=72840b18-c04d-11eb-93a7-0626662c84ae waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.4.85 72751d22-c04d-11eb-93a7-0626662c84ae  }\" containerID=105d4c34c4caf8440725b8a722b3a83f67d25d4fb63262b60fa005a8f35cfb5c datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc8d49ab23aa67 k8sPodName=pods-6606/pod-logs-websocket-407546c2-be85-4476-8f83-6d61e87e9d07 labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=479 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=479 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=pods,k8s:io.cilium.k8s.namespace.labels.e2e-run=eac7ba15-08b0-4b6e-8296-1f59abff1af8,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=pods-6606,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=pods-6606\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. 
All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.e2e-framework:pods k8s:io.cilium.k8s.namespace.labels.e2e-run:eac7ba15-08b0-4b6e-8296-1f59abff1af8 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:pods-6606]\" subsys=crd-allocator\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=pods;k8s:io.cilium.k8s.namespace.labels.e2e-run=eac7ba15-08b0-4b6e-8296-1f59abff1af8;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=pods-6606;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=pods-6606;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=479 identity=56621 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=pods,k8s:io.cilium.k8s.namespace.labels.e2e-run=eac7ba15-08b0-4b6e-8296-1f59abff1af8,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=pods-6606,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=pods-6606\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=479 identity=56621 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=479 identity=56621 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=479 identity=56621 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=271.907557ms subsys=rate totalDuration=271.998786ms uuid=72840b18-c04d-11eb-93a7-0626662c84ae waitDurationTotal=0s\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=5 rateLimiterSkipped=true subsys=rate uuid=72ee661c-c04d-11eb-93a7-0626662c84ae\nlevel=info msg=\"API request released by rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=5 rateLimiterSkipped=true subsys=rate uuid=72ee661c-c04d-11eb-93a7-0626662c84ae waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.4.15 72ecabc6-c04d-11eb-93a7-0626662c84ae  }\" containerID=d5ab7d30f30a9a4c4b0fc70ae42a415ce46dd9fbc40c0e9c8520ed26ffecb021 datapathConfiguration=\"&{false false false false <nil>}\" interface=lxcf6fb0d2a3e74 k8sPodName=services-3817/service-headless-mjpjs labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1558 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1558 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services,k8s:io.cilium.k8s.namespace.labels.e2e-run=d740bb95-f430-47d7-935a-f5f0b65a850d,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-3817,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=services-3817,k8s:name=service-headless\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info 
msg=\"Reserved new local key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services;k8s:io.cilium.k8s.namespace.labels.e2e-run=d740bb95-f430-47d7-935a-f5f0b65a850d;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-3817;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=services-3817;k8s:name=service-headless;\" subsys=allocator\nlevel=info msg=\"Reusing existing global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services;k8s:io.cilium.k8s.namespace.labels.e2e-run=d740bb95-f430-47d7-935a-f5f0b65a850d;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-3817;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=services-3817;k8s:name=service-headless;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1558 identity=36581 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=services,k8s:io.cilium.k8s.namespace.labels.e2e-run=d740bb95-f430-47d7-935a-f5f0b65a850d,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=services-3817,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=services-3817,k8s:name=service-headless\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1558 identity=36581 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1558 identity=36581 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=1558 identity=36581 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=278.353348ms subsys=rate totalDuration=278.445993ms uuid=72ee661c-c04d-11eb-93a7-0626662c84ae waitDurationTotal=0s\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=6 rateLimiterSkipped=true subsys=rate uuid=731d12fb-c04d-11eb-93a7-0626662c84ae\nlevel=info msg=\"API request released by rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=6 rateLimiterSkipped=true subsys=rate uuid=731d12fb-c04d-11eb-93a7-0626662c84ae waitDurationTotal=0s\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.4.37 73198c04-c04d-11eb-93a7-0626662c84ae  }\" containerID=fecc176b4f1481ded4d64ed8bff89d3d8a9bcf8ffd384238cbdc7dce88902743 datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc9d210ae54e82 k8sPodName=projected-4322/pod-projected-secrets-66a58fc4-bf5c-49a8-817d-d38176ac13d5 labels=\"[]\" subsys=daemon sync-build=true\nlevel=info msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=354 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=354 
identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=projected,k8s:io.cilium.k8s.namespace.labels.e2e-run=3548d597-3ead-4b26-8f13-1aa34be3f18d,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=projected-4322,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=projected-4322\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.e2e-framework:projected k8s:io.cilium.k8s.namespace.labels.e2e-run:3548d597-3ead-4b26-8f13-1aa34be3f18d k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:projected-4322]\" subsys=crd-allocator\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=projected;k8s:io.cilium.k8s.namespace.labels.e2e-run=3548d597-3ead-4b26-8f13-1aa34be3f18d;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=projected-4322;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=projected-4322;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=354 identity=43975 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=projected,k8s:io.cilium.k8s.namespace.labels.e2e-run=3548d597-3ead-4b26-8f13-1aa34be3f18d,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=projected-4322,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=projected-4322\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=354 identity=43975 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=354 identity=43975 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=354 identity=43975 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=295.558451ms subsys=rate totalDuration=295.646969ms uuid=731d12fb-c04d-11eb-93a7-0626662c84ae waitDurationTotal=0s\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=info msg=\"Processing API request with rate limiter\" maxWaitDuration=15s name=endpoint-create parallelRequests=6 subsys=rate uuid=73edb321-c04d-11eb-93a7-0626662c84ae\nlevel=info msg=\"API request released by rate limiter\" burst=7 limit=1.24/s maxWaitDuration=15s maxWaitDurationLimiter=14.999887098s name=endpoint-create parallelRequests=6 subsys=rate uuid=73edb321-c04d-11eb-93a7-0626662c84ae waitDurationLimiter=0s waitDurationTotal=\"125.478µs\"\nlevel=info msg=\"Create endpoint request\" addressing=\"&{100.96.4.30 73eab8b2-c04d-11eb-93a7-0626662c84ae  }\" containerID=f77e96bf2e79f2a6d72abc7ef9d48ff2eff4c3fd652ad9bacc55b2dafbfc43e6 datapathConfiguration=\"&{false false false false <nil>}\" interface=lxc393be23ad754 k8sPodName=secrets-3914/pod-secrets-a94f9c35-f757-4cd7-964b-f48aff0dbe25 labels=\"[]\" subsys=daemon sync-build=true\nlevel=info 
msg=\"New endpoint\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3351 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Resolving identity labels (blocking)\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3351 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=secrets,k8s:io.cilium.k8s.namespace.labels.e2e-run=107d7480-0179-4393-a5e6-a2876833b10c,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=secrets-3914,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=secrets-3914\" ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination\" labels=\"map[k8s:io.cilium.k8s.namespace.labels.e2e-framework:secrets k8s:io.cilium.k8s.namespace.labels.e2e-run:107d7480-0179-4393-a5e6-a2876833b10c k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name:secrets-3914]\" subsys=crd-allocator\nlevel=info msg=\"Allocated new global key\" key=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=secrets;k8s:io.cilium.k8s.namespace.labels.e2e-run=107d7480-0179-4393-a5e6-a2876833b10c;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=secrets-3914;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=secrets-3914;\" subsys=allocator\nlevel=info msg=\"Identity of endpoint changed\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3351 identity=44672 identityLabels=\"k8s:io.cilium.k8s.namespace.labels.e2e-framework=secrets,k8s:io.cilium.k8s.namespace.labels.e2e-run=107d7480-0179-4393-a5e6-a2876833b10c,k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=secrets-3914,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=secrets-3914\" ipv4= ipv6= k8sPodName=/ oldIdentity=\"no identity\" subsys=endpoint\nlevel=info msg=\"Waiting for endpoint to be generated\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3351 identity=44672 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Rewrote endpoint BPF program\" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3351 identity=44672 ipv4= ipv6= k8sPodName=/ subsys=endpoint\nlevel=info msg=\"Successful endpoint creation\" containerID= datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=3351 identity=44672 ipv4= ipv6= k8sPodName=/ subsys=daemon\nlevel=info msg=\"API call has been processed\" name=endpoint-create processingDuration=256.685883ms subsys=rate totalDuration=256.832016ms uuid=73edb321-c04d-11eb-93a7-0626662c84ae waitDurationTotal=\"125.478µs\"\nlevel=info msg=\"regenerating all endpoints\" reason=\"one or more identities created or deleted\" subsys=endpoint-manager\nlevel=warning msg=\"Unable to update ipcache map entry on pod add\" error=\"ipcache entry for podIP 100.96.4.85 owned by kvstore or agent\" hostIP=100.96.4.85 k8sNamespace=pods-6606 k8sPodName=pod-logs-websocket-407546c2-be85-4476-8f83-6d61e87e9d07 podIP=100.96.4.85 podIPs=\"[{100.96.4.85}]\" subsys=k8s-watcher\nlevel=info msg=\"regenerating all endpoints\" reason= subsys=endpoint-manager\n==== END logs for container cilium-agent of pod kube-system/cilium-wzxqf ====\n==== START logs for container autoscaler of pod kube-system/coredns-autoscaler-6f594f4c58-kb772 ====\nI0529 07:10:55.339900  
     1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/coredns\nI0529 07:10:55.596004       1 autoscaler_server.go:157] ConfigMap not found: configmaps \"coredns-autoscaler\" not found, will create one with default params\nI0529 07:10:55.603737       1 k8sclient.go:147] Created ConfigMap coredns-autoscaler in namespace kube-system\nI0529 07:10:55.603759       1 plugin.go:50] Set control mode to linear\nI0529 07:10:55.603765       1 linear_controller.go:60] ConfigMap version change (old:  new: 874) - rebuilding params\nI0529 07:10:55.603770       1 linear_controller.go:61] Params from apiserver: \n{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true}\nI0529 07:10:55.603826       1 linear_controller.go:80] Defaulting min replicas count to 1 for linear controller\nI0529 07:10:55.609850       1 k8sclient.go:272] Cluster status: SchedulableNodes[5], SchedulableCores[10]\nI0529 07:10:55.609868       1 k8sclient.go:273] Replicas are not as expected : updating replicas from 1 to 2\n==== END logs for container autoscaler of pod kube-system/coredns-autoscaler-6f594f4c58-kb772 ====\n==== START logs for container coredns of pod kube-system/coredns-f45c4bf76-5xwkz ====\nW0529 07:10:48.673671       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nW0529 07:10:48.674588       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\n.:53\n[INFO] plugin/reload: Running configuration MD5 = ce1e85197887ce49f3d78b19ce3dfa68\nCoreDNS-1.8.3\nlinux/amd64, go1.16, 4293992\n==== END logs for container coredns of pod kube-system/coredns-f45c4bf76-5xwkz ====\n==== START logs for container coredns of pod kube-system/coredns-f45c4bf76-hwsqm ====\nW0529 07:11:02.869362       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nW0529 07:11:02.870387       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\n.:53\n[INFO] plugin/reload: Running configuration MD5 = ce1e85197887ce49f3d78b19ce3dfa68\nCoreDNS-1.8.3\nlinux/amd64, go1.16, 4293992\n==== END logs for container coredns of pod kube-system/coredns-f45c4bf76-hwsqm ====\n==== START logs for container dns-controller of pod kube-system/dns-controller-5f98b58844-2t8db ====\ndns-controller version 0.1\nI0529 07:09:30.274242       1 main.go:199] initializing the watch controllers, namespace: \"\"\nI0529 07:09:30.274276       1 main.go:223] Ingress controller disabled\nI0529 07:09:30.275406       1 dnscontroller.go:108] starting DNS controller\nI0529 07:09:30.275409       1 node.go:60] starting node controller\nI0529 07:09:30.275898       1 dnscontroller.go:170] scope not yet ready: service\nI0529 07:09:30.275910       1 pod.go:60] starting pod controller\nI0529 07:09:30.276601       1 service.go:60] starting service controller\nI0529 07:09:30.309286       1 dnscontroller.go:625] Update desired state: node/ip-172-20-36-217.ap-southeast-1.compute.internal: [{A node/ip-172-20-36-217.ap-southeast-1.compute.internal/internal 172.20.36.217 true} {A node/ip-172-20-36-217.ap-southeast-1.compute.internal/external 13.212.113.26 true} {A node/role=master/internal 172.20.36.217 true} {A node/role=master/external 13.212.113.26 true} {A node/role=master/ 
ip-172-20-36-217.ap-southeast-1.compute.internal true} {A node/role=master/ ip-172-20-36-217.ap-southeast-1.compute.internal true} {A node/role=master/ ec2-13-212-113-26.ap-southeast-1.compute.amazonaws.com true}]\nI0529 07:09:35.276366       1 dnscache.go:74] querying all DNS zones (no cached results)\nI0529 07:09:45.622199       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal: [{_alias api.e2e-459b123097-cb70c.test-cncf-aws.k8s.io. node/ip-172-20-36-217.ap-southeast-1.compute.internal/external false}]\nI0529 07:09:46.278960       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0529 07:09:46.278992       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0529 07:09:48.637120       1 dnscontroller.go:625] Update desired state: pod/kube-system/kube-apiserver-ip-172-20-36-217.ap-southeast-1.compute.internal: [{_alias api.e2e-459b123097-cb70c.test-cncf-aws.k8s.io. node/ip-172-20-36-217.ap-southeast-1.compute.internal/external false} {A api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io. 172.20.36.217 false}]\nI0529 07:09:50.177242       1 dnscontroller.go:585] Adding DNS changes to batch {A api.e2e-459b123097-cb70c.test-cncf-aws.k8s.io.} [13.212.113.26]\nI0529 07:09:50.177279       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0529 07:09:50.330060       1 dnscontroller.go:625] Update desired state: pod/kube-system/kops-controller-hqjvz: [{A kops-controller.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io. 172.20.36.217 false}]\nI0529 07:09:55.497562       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0529 07:09:55.497657       1 dnscontroller.go:482] Querying all dnsprovider records for zone \"test-cncf-aws.k8s.io.\"\nI0529 07:09:58.522429       1 dnscontroller.go:585] Adding DNS changes to batch {A kops-controller.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io.} [172.20.36.217]\nI0529 07:09:58.522457       1 dnscontroller.go:274] Using default TTL of 1m0s\nI0529 07:09:58.522836       1 dnscontroller.go:585] Adding DNS changes to batch {A api.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io.} [172.20.36.217]\nI0529 07:09:58.522856       1 dnscontroller.go:323] Applying DNS changeset for zone test-cncf-aws.k8s.io.::ZEMLNXIIWQ0RV\nI0529 07:10:12.886786       1 dnscontroller.go:625] Update desired state: node/ip-172-20-59-92.ap-southeast-1.compute.internal: [{A node/ip-172-20-59-92.ap-southeast-1.compute.internal/internal 172.20.59.92 true} {A node/ip-172-20-59-92.ap-southeast-1.compute.internal/external 54.169.50.147 true} {A node/role=node/internal 172.20.59.92 true} {A node/role=node/external 54.169.50.147 true} {A node/role=node/ ip-172-20-59-92.ap-southeast-1.compute.internal true} {A node/role=node/ ip-172-20-59-92.ap-southeast-1.compute.internal true} {A node/role=node/ ec2-54-169-50-147.ap-southeast-1.compute.amazonaws.com true}]\nI0529 07:10:13.189400       1 dnscontroller.go:625] Update desired state: node/ip-172-20-54-213.ap-southeast-1.compute.internal: [{A node/ip-172-20-54-213.ap-southeast-1.compute.internal/internal 172.20.54.213 true} {A node/ip-172-20-54-213.ap-southeast-1.compute.internal/external 54.255.203.155 true} {A node/role=node/internal 172.20.54.213 true} {A node/role=node/external 54.255.203.155 true} {A node/role=node/ ip-172-20-54-213.ap-southeast-1.compute.internal true} {A node/role=node/ ip-172-20-54-213.ap-southeast-1.compute.internal true} {A node/role=node/ 
ec2-54-255-203-155.ap-southeast-1.compute.amazonaws.com true}]\nI0529 07:10:13.456008       1 dnscontroller.go:625] Update desired state: node/ip-172-20-56-44.ap-southeast-1.compute.internal: [{A node/ip-172-20-56-44.ap-southeast-1.compute.internal/internal 172.20.56.44 true} {A node/ip-172-20-56-44.ap-southeast-1.compute.internal/external 13.228.203.244 true} {A node/role=node/internal 172.20.56.44 true} {A node/role=node/external 13.228.203.244 true} {A node/role=node/ ip-172-20-56-44.ap-southeast-1.compute.internal true} {A node/role=node/ ip-172-20-56-44.ap-southeast-1.compute.internal true} {A node/role=node/ ec2-13-228-203-244.ap-southeast-1.compute.amazonaws.com true}]\nI0529 07:10:22.884261       1 dnscontroller.go:625] Update desired state: node/ip-172-20-61-32.ap-southeast-1.compute.internal: [{A node/ip-172-20-61-32.ap-southeast-1.compute.internal/internal 172.20.61.32 true} {A node/ip-172-20-61-32.ap-southeast-1.compute.internal/external 13.250.10.232 true} {A node/role=node/internal 172.20.61.32 true} {A node/role=node/external 13.250.10.232 true} {A node/role=node/ ip-172-20-61-32.ap-southeast-1.compute.internal true} {A node/role=node/ ip-172-20-61-32.ap-southeast-1.compute.internal true} {A node/role=node/ ec2-13-250-10-232.ap-southeast-1.compute.amazonaws.com true}]\n==== END logs for container dns-controller of pod kube-system/dns-controller-5f98b58844-2t8db ====\n==== START logs for container etcd-manager of pod kube-system/etcd-manager-events-ip-172-20-36-217.ap-southeast-1.compute.internal ====\netcd-manager\nI0529 07:08:13.454123    3121 volumes.go:86] AWS API Request: ec2metadata/GetToken\nI0529 07:08:13.455067    3121 volumes.go:86] AWS API Request: ec2metadata/GetDynamicData\nI0529 07:08:13.455868    3121 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0529 07:08:13.456311    3121 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0529 07:08:13.456781    3121 volumes.go:86] AWS API Request: ec2metadata/GetMetadata\nI0529 07:08:13.457283    3121 main.go:305] Mounting available etcd volumes matching tags [k8s.io/etcd/events k8s.io/role/master=1 kubernetes.io/cluster/e2e-459b123097-cb70c.test-cncf-aws.k8s.io=owned]; nameTag=k8s.io/etcd/events\nI0529 07:08:13.459151    3121 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0529 07:08:13.584645    3121 mounter.go:304] Trying to mount master volume: \"vol-0f9e3c5f961485f6d\"\nI0529 07:08:13.584664    3121 volumes.go:331] Trying to attach volume \"vol-0f9e3c5f961485f6d\" at \"/dev/xvdu\"\nI0529 07:08:13.584777    3121 volumes.go:86] AWS API Request: ec2/AttachVolume\nI0529 07:08:14.001331    3121 volumes.go:349] AttachVolume request returned {\n  AttachTime: 2021-05-29 07:08:13.987 +0000 UTC,\n  Device: \"/dev/xvdu\",\n  InstanceId: \"i-07708f476b85f31f9\",\n  State: \"attaching\",\n  VolumeId: \"vol-0f9e3c5f961485f6d\"\n}\nI0529 07:08:14.001516    3121 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0529 07:08:14.130513    3121 mounter.go:318] Currently attached volumes: [0xc00007e500]\nI0529 07:08:14.130531    3121 mounter.go:72] Master volume \"vol-0f9e3c5f961485f6d\" is attached at \"/dev/xvdu\"\nI0529 07:08:14.131285    3121 mounter.go:86] Doing safe-format-and-mount of /dev/xvdu to /mnt/master-vol-0f9e3c5f961485f6d\nI0529 07:08:14.131306    3121 volumes.go:234] volume vol-0f9e3c5f961485f6d not mounted at /rootfs/dev/xvdu\nI0529 07:08:14.131336    3121 volumes.go:263] nvme path not found \"/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0f9e3c5f961485f6d\"\nI0529 07:08:14.131359   
 3121 volumes.go:251] volume vol-0f9e3c5f961485f6d not mounted at nvme-Amazon_Elastic_Block_Store_vol0f9e3c5f961485f6d\nI0529 07:08:14.131373    3121 mounter.go:121] Waiting for volume \"vol-0f9e3c5f961485f6d\" to be mounted\nI0529 07:08:15.131493    3121 volumes.go:234] volume vol-0f9e3c5f961485f6d not mounted at /rootfs/dev/xvdu\nI0529 07:08:15.131738    3121 volumes.go:248] found nvme volume \"nvme-Amazon_Elastic_Block_Store_vol0f9e3c5f961485f6d\" at \"/dev/nvme1n1\"\nI0529 07:08:15.131818    3121 mounter.go:125] Found volume \"vol-0f9e3c5f961485f6d\" mounted at device \"/dev/nvme1n1\"\nI0529 07:08:15.132536    3121 mounter.go:171] Creating mount directory \"/rootfs/mnt/master-vol-0f9e3c5f961485f6d\"\nI0529 07:08:15.132748    3121 mounter.go:176] Mounting device \"/dev/nvme1n1\" on \"/mnt/master-vol-0f9e3c5f961485f6d\"\nI0529 07:08:15.132835    3121 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme1n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])\nI0529 07:08:15.132971    3121 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]\nI0529 07:08:15.151798    3121 mount_linux.go:449] Output: \"\"\nI0529 07:08:15.151827    3121 mount_linux.go:408] Disk \"/dev/nvme1n1\" appears to be unformatted, attempting to format as type: \"ext4\" with options: [-F -m0 /dev/nvme1n1]\nI0529 07:08:15.151846    3121 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- mkfs.ext4 -F -m0 /dev/nvme1n1]\nI0529 07:08:15.495689    3121 mount_linux.go:418] Disk successfully formatted (mkfs): ext4 - /dev/nvme1n1 /mnt/master-vol-0f9e3c5f961485f6d\nI0529 07:08:15.495708    3121 mount_linux.go:436] Attempting to mount disk /dev/nvme1n1 in ext4 format at /mnt/master-vol-0f9e3c5f961485f6d\nI0529 07:08:15.495725    3121 nsenter.go:80] nsenter mount /dev/nvme1n1 /mnt/master-vol-0f9e3c5f961485f6d ext4 [defaults]\nI0529 07:08:15.495745    3121 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/systemd-run --description=Kubernetes transient mount for /mnt/master-vol-0f9e3c5f961485f6d --scope -- /bin/mount -t ext4 -o defaults /dev/nvme1n1 /mnt/master-vol-0f9e3c5f961485f6d]\nI0529 07:08:15.518636    3121 nsenter.go:84] Output of mounting /dev/nvme1n1 to /mnt/master-vol-0f9e3c5f961485f6d: Running scope as unit: run-rdc8ff32b698d434fbc5b933ba5c4a5f2.scope\nI0529 07:08:15.518659    3121 mount_linux.go:446] Attempting to determine if disk \"/dev/nvme1n1\" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme1n1])\nI0529 07:08:15.518680    3121 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- blkid -p -s TYPE -s PTTYPE -o export /dev/nvme1n1]\nI0529 07:08:15.533112    3121 mount_linux.go:449] Output: \"DEVNAME=/dev/nvme1n1\\nTYPE=ext4\\n\"\nI0529 07:08:15.533133    3121 resizefs_linux.go:53] ResizeFS.Resize - Expanding mounted volume /dev/nvme1n1\nI0529 07:08:15.533147    3121 nsenter.go:132] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- resize2fs /dev/nvme1n1]\nI0529 07:08:15.539110    3121 resizefs_linux.go:68] Device /dev/nvme1n1 resized successfully\nI0529 07:08:15.549884    3121 mount_linux.go:206] Detected OS with systemd\nI0529 07:08:15.551676    3121 mounter.go:224] mounting inside container: /rootfs/dev/nvme1n1 -> /rootfs/mnt/master-vol-0f9e3c5f961485f6d\nI0529 07:08:15.551695    3121 mount_linux.go:175] Mounting cmd (systemd-run) with arguments 
(--description=Kubernetes transient mount for /rootfs/mnt/master-vol-0f9e3c5f961485f6d --scope -- mount  /rootfs/dev/nvme1n1 /rootfs/mnt/master-vol-0f9e3c5f961485f6d)\nI0529 07:08:15.562985    3121 mounter.go:94] mounted master volume \"vol-0f9e3c5f961485f6d\" on /mnt/master-vol-0f9e3c5f961485f6d\nI0529 07:08:15.563011    3121 main.go:320] discovered IP address: 172.20.36.217\nI0529 07:08:15.563016    3121 main.go:325] Setting data dir to /rootfs/mnt/master-vol-0f9e3c5f961485f6d\nI0529 07:08:15.737159    3121 certs.go:183] generating certificate for \"etcd-manager-server-etcd-events-a\"\nI0529 07:08:15.987750    3121 certs.go:183] generating certificate for \"etcd-manager-client-etcd-events-a\"\nI0529 07:08:15.991692    3121 server.go:87] starting GRPC server using TLS, ServerName=\"etcd-manager-server-etcd-events-a\"\nI0529 07:08:15.991998    3121 main.go:474] peerClientIPs: [172.20.36.217]\nI0529 07:08:16.180480    3121 certs.go:183] generating certificate for \"etcd-manager-etcd-events-a\"\nI0529 07:08:16.182428    3121 server.go:105] GRPC server listening on \"172.20.36.217:3997\"\nI0529 07:08:16.182780    3121 volumes.go:86] AWS API Request: ec2/DescribeVolumes\nI0529 07:08:16.302333    3121 volumes.go:86] AWS API Request: ec2/DescribeInstances\nI0529 07:08:16.356501    3121 peers.go:115] found new candidate peer from discovery: etcd-events-a [{172.20.36.217 0} {172.20.36.217 0}]\nI0529 07:08:16.356539    3121 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:[172.20.36.217 172.20.36.217]], final=map[172.20.36.217:[etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io]]\nI0529 07:08:16.356806    3121 peers.go:295] connecting to peer \"etcd-events-a\" with TLS policy, servername=\"etcd-manager-server-etcd-events-a\"\nI0529 07:08:18.183401    3121 controller.go:189] starting controller iteration\nI0529 07:08:18.183948    3121 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.36.217:3997\" > leadership_token:\"fh19lkMhXp1GMdTrrY4WMA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.36.217:3997\" > > \nI0529 07:08:18.184216    3121 commands.go:41] refreshing commands\nI0529 07:08:18.184411    3121 s3context.go:334] product_uuid is \"ec2036c7-7d52-3501-53d4-88ceeb9592de\", assuming running on EC2\nI0529 07:08:18.185850    3121 s3context.go:166] got region from metadata: \"ap-southeast-1\"\nI0529 07:08:18.212259    3121 s3context.go:213] found bucket in region \"us-west-1\"\nI0529 07:08:19.008153    3121 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands\nI0529 07:08:19.008175    3121 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0529 07:08:29.201024    3121 controller.go:189] starting controller iteration\nI0529 07:08:29.201052    3121 controller.go:266] Broadcasting leadership assertion with token \"fh19lkMhXp1GMdTrrY4WMA\"\nI0529 07:08:29.201310    3121 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.36.217:3997\" > leadership_token:\"fh19lkMhXp1GMdTrrY4WMA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.36.217:3997\" > > \nI0529 07:08:29.201446    3121 controller.go:295] I am leader with token \"fh19lkMhXp1GMdTrrY4WMA\"\nI0529 07:08:29.201753    3121 controller.go:302] etcd 
cluster state: etcdClusterState\n  members:\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.36.217:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995\" > }\nI0529 07:08:29.201806    3121 controller.go:303] etcd cluster members: map[]\nI0529 07:08:29.201817    3121 controller.go:641] sending member map to all peers: \nI0529 07:08:29.202038    3121 commands.go:38] not refreshing commands - TTL not hit\nI0529 07:08:29.202052    3121 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0529 07:08:29.947331    3121 controller.go:359] detected that there is no existing cluster\nI0529 07:08:29.947346    3121 commands.go:41] refreshing commands\nI0529 07:08:30.208762    3121 vfs.go:120] listed commands in s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/events/control: 0 commands\nI0529 07:08:30.208784    3121 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0529 07:08:30.398869    3121 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\" addresses:\"172.20.36.217\" > \nI0529 07:08:30.399223    3121 etcdserver.go:248] updating hosts: map[172.20.36.217:[etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io]]\nI0529 07:08:30.399256    3121 hosts.go:84] hosts update: primary=map[172.20.36.217:[etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:[172.20.36.217 172.20.36.217]], final=map[172.20.36.217:[etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io]]\nI0529 07:08:30.399389    3121 hosts.go:181] skipping update of unchanged /etc/hosts\nI0529 07:08:30.399558    3121 newcluster.go:136] starting new etcd cluster with [etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.36.217:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995\" > }]\nI0529 07:08:30.400023    3121 newcluster.go:153] JoinClusterResponse: \nI0529 07:08:30.401018    3121 etcdserver.go:556] starting etcd with state new_cluster:true cluster:<cluster_token:\"NWfPtESfzkcrfnlnQfAn1w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true \nI0529 07:08:30.401053    3121 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0f9e3c5f961485f6d/data/NWfPtESfzkcrfnlnQfAn1w\nI0529 07:08:30.401907    
3121 pki.go:59] adding peerClientIPs [172.20.36.217]\nI0529 07:08:30.401930    3121 pki.go:67] generating peer keypair for etcd: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io] IPs:[172.20.36.217 127.0.0.1]} Usages:[2 1]}\nI0529 07:08:30.502367    3121 certs.go:183] generating certificate for \"etcd-events-a\"\nI0529 07:08:30.504479    3121 pki.go:110] building client-serving certificate: {CommonName:etcd-events-a Organization:[] AltNames:{DNSNames:[etcd-events-a etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io] IPs:[127.0.0.1]} Usages:[1 2]}\nI0529 07:08:30.606166    3121 certs.go:183] generating certificate for \"etcd-events-a\"\nI0529 07:08:30.665460    3121 certs.go:183] generating certificate for \"etcd-events-a\"\nI0529 07:08:30.667406    3121 etcdprocess.go:203] executing command /opt/etcd-v3.4.13-linux-amd64/etcd [/opt/etcd-v3.4.13-linux-amd64/etcd]\nI0529 07:08:30.668000    3121 newcluster.go:171] JoinClusterResponse: \nI0529 07:08:30.668133    3121 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-spec\"\nI0529 07:08:30.668197    3121 s3context.go:241] Checking default bucket encryption for \"k8s-kops-prow\"\n2021-05-29 07:08:30.674389 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995\n2021-05-29 07:08:30.674417 I | pkg/flags: recognized and used environment variable ETCD_CERT_FILE=/rootfs/mnt/master-vol-0f9e3c5f961485f6d/pki/NWfPtESfzkcrfnlnQfAn1w/clients/server.crt\n2021-05-29 07:08:30.674424 I | pkg/flags: recognized and used environment variable ETCD_CLIENT_CERT_AUTH=true\n2021-05-29 07:08:30.674537 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/rootfs/mnt/master-vol-0f9e3c5f961485f6d/data/NWfPtESfzkcrfnlnQfAn1w\n2021-05-29 07:08:30.674553 I | pkg/flags: recognized and used environment variable ETCD_ENABLE_V2=false\n2021-05-29 07:08:30.674578 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\n2021-05-29 07:08:30.674584 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a=https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\n2021-05-29 07:08:30.674612 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=new\n2021-05-29 07:08:30.674619 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=NWfPtESfzkcrfnlnQfAn1w\n2021-05-29 07:08:30.674624 I | pkg/flags: recognized and used environment variable ETCD_KEY_FILE=/rootfs/mnt/master-vol-0f9e3c5f961485f6d/pki/NWfPtESfzkcrfnlnQfAn1w/clients/server.key\n2021-05-29 07:08:30.674633 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:3995\n2021-05-29 07:08:30.674642 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2381\n2021-05-29 07:08:30.674655 I | pkg/flags: recognized and used environment variable ETCD_LOG_OUTPUTS=stdout\n2021-05-29 07:08:30.674665 I | pkg/flags: recognized and used environment variable ETCD_LOGGER=zap\n2021-05-29 07:08:30.674697 I | pkg/flags: recognized and used environment variable 
ETCD_NAME=etcd-events-a\n2021-05-29 07:08:30.674706 I | pkg/flags: recognized and used environment variable ETCD_PEER_CERT_FILE=/rootfs/mnt/master-vol-0f9e3c5f961485f6d/pki/NWfPtESfzkcrfnlnQfAn1w/peers/me.crt\n2021-05-29 07:08:30.674719 I | pkg/flags: recognized and used environment variable ETCD_PEER_CLIENT_CERT_AUTH=true\n2021-05-29 07:08:30.674726 I | pkg/flags: recognized and used environment variable ETCD_PEER_KEY_FILE=/rootfs/mnt/master-vol-0f9e3c5f961485f6d/pki/NWfPtESfzkcrfnlnQfAn1w/peers/me.key\n2021-05-29 07:08:30.674731 I | pkg/flags: recognized and used environment variable ETCD_PEER_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0f9e3c5f961485f6d/pki/NWfPtESfzkcrfnlnQfAn1w/peers/ca.crt\n2021-05-29 07:08:30.674768 I | pkg/flags: recognized and used environment variable ETCD_TRUSTED_CA_FILE=/rootfs/mnt/master-vol-0f9e3c5f961485f6d/pki/NWfPtESfzkcrfnlnQfAn1w/clients/ca.crt\n2021-05-29 07:08:30.674779 W | pkg/flags: unrecognized environment variable ETCD_LISTEN_METRICS_URLS=\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.674Z\",\"caller\":\"embed/etcd.go:117\",\"msg\":\"configuring peer listeners\",\"listen-peer-urls\":[\"https://0.0.0.0:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.674Z\",\"caller\":\"embed/etcd.go:468\",\"msg\":\"starting with peer TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0f9e3c5f961485f6d/pki/NWfPtESfzkcrfnlnQfAn1w/peers/me.crt, key = /rootfs/mnt/master-vol-0f9e3c5f961485f6d/pki/NWfPtESfzkcrfnlnQfAn1w/peers/me.key, trusted-ca = /rootfs/mnt/master-vol-0f9e3c5f961485f6d/pki/NWfPtESfzkcrfnlnQfAn1w/peers/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.675Z\",\"caller\":\"embed/etcd.go:127\",\"msg\":\"configuring client listeners\",\"listen-client-urls\":[\"https://0.0.0.0:3995\"]}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.675Z\",\"caller\":\"embed/etcd.go:302\",\"msg\":\"starting an etcd 
server\",\"etcd-version\":\"3.4.13\",\"git-sha\":\"ae9734ed2\",\"go-version\":\"go1.12.17\",\"go-os\":\"linux\",\"go-arch\":\"amd64\",\"max-cpu-set\":2,\"max-cpu-available\":2,\"member-initialized\":false,\"name\":\"etcd-events-a\",\"data-dir\":\"/rootfs/mnt/master-vol-0f9e3c5f961485f6d/data/NWfPtESfzkcrfnlnQfAn1w\",\"wal-dir\":\"\",\"wal-dir-dedicated\":\"\",\"member-dir\":\"/rootfs/mnt/master-vol-0f9e3c5f961485f6d/data/NWfPtESfzkcrfnlnQfAn1w/member\",\"force-new-cluster\":false,\"heartbeat-interval\":\"100ms\",\"election-timeout\":\"1s\",\"initial-election-tick-advance\":true,\"snapshot-count\":100000,\"snapshot-catchup-entries\":5000,\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995\"],\"listen-client-urls\":[\"https://0.0.0.0:3995\"],\"listen-metrics-urls\":[],\"cors\":[\"*\"],\"host-whitelist\":[\"*\"],\"initial-cluster\":\"etcd-events-a=https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\",\"initial-cluster-state\":\"new\",\"initial-cluster-token\":\"NWfPtESfzkcrfnlnQfAn1w\",\"quota-size-bytes\":2147483648,\"pre-vote\":false,\"initial-corrupt-check\":false,\"corrupt-check-time-interval\":\"0s\",\"auto-compaction-mode\":\"periodic\",\"auto-compaction-retention\":\"0s\",\"auto-compaction-interval\":\"0s\",\"discovery-url\":\"\",\"discovery-proxy\":\"\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.679Z\",\"caller\":\"etcdserver/backend.go:80\",\"msg\":\"opened backend db\",\"path\":\"/rootfs/mnt/master-vol-0f9e3c5f961485f6d/data/NWfPtESfzkcrfnlnQfAn1w/member/snap/db\",\"took\":\"2.938682ms\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.680Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\",\"host\":\"etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\",\"resolved-addr\":\"172.20.36.217:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.680Z\",\"caller\":\"netutil/netutil.go:112\",\"msg\":\"resolved URL Host\",\"url\":\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\",\"host\":\"etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\",\"resolved-addr\":\"172.20.36.217:2381\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.684Z\",\"caller\":\"etcdserver/raft.go:486\",\"msg\":\"starting local member\",\"local-member-id\":\"f7d3bf6f91677245\",\"cluster-id\":\"d9d93039f76d6831\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.684Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"f7d3bf6f91677245 switched to configuration voters=()\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.684Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"f7d3bf6f91677245 became follower at term 0\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.684Z\",\"caller\":\"raft/raft.go:383\",\"msg\":\"newRaft f7d3bf6f91677245 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.685Z\",\"caller\":\"raft/raft.go:700\",\"msg\":\"f7d3bf6f91677245 became follower at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.685Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"f7d3bf6f91677245 switched to configuration 
voters=(17857827433355899461)\"}\n{\"level\":\"warn\",\"ts\":\"2021-05-29T07:08:30.687Z\",\"caller\":\"auth/store.go:1366\",\"msg\":\"simple token is not cryptographically signed\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.691Z\",\"caller\":\"etcdserver/quota.go:98\",\"msg\":\"enabled backend quota with default value\",\"quota-name\":\"v3-applier\",\"quota-size-bytes\":2147483648,\"quota-size\":\"2.1 GB\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.693Z\",\"caller\":\"etcdserver/server.go:803\",\"msg\":\"starting etcd server\",\"local-member-id\":\"f7d3bf6f91677245\",\"local-server-version\":\"3.4.13\",\"cluster-version\":\"to_be_decided\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.694Z\",\"caller\":\"etcdserver/server.go:669\",\"msg\":\"started as single-node; fast-forwarding election ticks\",\"local-member-id\":\"f7d3bf6f91677245\",\"forward-ticks\":9,\"forward-duration\":\"900ms\",\"election-ticks\":10,\"election-timeout\":\"1s\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.694Z\",\"caller\":\"raft/raft.go:1530\",\"msg\":\"f7d3bf6f91677245 switched to configuration voters=(17857827433355899461)\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.694Z\",\"caller\":\"membership/cluster.go:392\",\"msg\":\"added member\",\"cluster-id\":\"d9d93039f76d6831\",\"local-member-id\":\"f7d3bf6f91677245\",\"added-peer-id\":\"f7d3bf6f91677245\",\"added-peer-peer-urls\":[\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\"]}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.695Z\",\"caller\":\"embed/etcd.go:711\",\"msg\":\"starting with client TLS\",\"tls-info\":\"cert = /rootfs/mnt/master-vol-0f9e3c5f961485f6d/pki/NWfPtESfzkcrfnlnQfAn1w/clients/server.crt, key = /rootfs/mnt/master-vol-0f9e3c5f961485f6d/pki/NWfPtESfzkcrfnlnQfAn1w/clients/server.key, trusted-ca = /rootfs/mnt/master-vol-0f9e3c5f961485f6d/pki/NWfPtESfzkcrfnlnQfAn1w/clients/ca.crt, client-cert-auth = true, crl-file = \",\"cipher-suites\":[]}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.695Z\",\"caller\":\"embed/etcd.go:244\",\"msg\":\"now serving peer/client/metrics\",\"local-member-id\":\"f7d3bf6f91677245\",\"initial-advertise-peer-urls\":[\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\"],\"listen-peer-urls\":[\"https://0.0.0.0:2381\"],\"advertise-client-urls\":[\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995\"],\"listen-client-urls\":[\"https://0.0.0.0:3995\"],\"listen-metrics-urls\":[]}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:30.695Z\",\"caller\":\"embed/etcd.go:579\",\"msg\":\"serving peer traffic\",\"address\":\"[::]:2381\"}\nI0529 07:08:31.070710    3121 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0529 07:08:31.276246    3121 controller.go:189] starting controller iteration\nI0529 07:08:31.276267    3121 controller.go:266] Broadcasting leadership assertion with token \"fh19lkMhXp1GMdTrrY4WMA\"\nI0529 07:08:31.276587    3121 leadership.go:37] Got LeaderNotification view:<leader:<id:\"etcd-events-a\" endpoints:\"172.20.36.217:3997\" > leadership_token:\"fh19lkMhXp1GMdTrrY4WMA\" healthy:<id:\"etcd-events-a\" endpoints:\"172.20.36.217:3997\" > > \nI0529 07:08:31.276733    3121 controller.go:295] I am leader with token \"fh19lkMhXp1GMdTrrY4WMA\"\nI0529 07:08:31.277202    3121 controller.go:705] base client OK for etcd for client urls 
[https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995]\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.285Z\",\"caller\":\"raft/raft.go:923\",\"msg\":\"f7d3bf6f91677245 is starting a new election at term 1\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.285Z\",\"caller\":\"raft/raft.go:713\",\"msg\":\"f7d3bf6f91677245 became candidate at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.285Z\",\"caller\":\"raft/raft.go:824\",\"msg\":\"f7d3bf6f91677245 received MsgVoteResp from f7d3bf6f91677245 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.285Z\",\"caller\":\"raft/raft.go:765\",\"msg\":\"f7d3bf6f91677245 became leader at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.285Z\",\"caller\":\"raft/node.go:325\",\"msg\":\"raft.node: f7d3bf6f91677245 elected leader f7d3bf6f91677245 at term 2\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.285Z\",\"caller\":\"etcdserver/server.go:2037\",\"msg\":\"published local member to cluster through raft\",\"local-member-id\":\"f7d3bf6f91677245\",\"local-member-attributes\":\"{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995]}\",\"request-path\":\"/0/members/f7d3bf6f91677245/attributes\",\"cluster-id\":\"d9d93039f76d6831\",\"publish-timeout\":\"7s\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.285Z\",\"caller\":\"etcdserver/server.go:2528\",\"msg\":\"setting up initial cluster version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.286Z\",\"caller\":\"embed/serve.go:191\",\"msg\":\"serving client traffic securely\",\"address\":\"[::]:3995\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.293Z\",\"caller\":\"membership/cluster.go:558\",\"msg\":\"set initial cluster version\",\"cluster-id\":\"d9d93039f76d6831\",\"local-member-id\":\"f7d3bf6f91677245\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.293Z\",\"caller\":\"api/capability.go:76\",\"msg\":\"enabled capabilities for version\",\"cluster-version\":\"3.4\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.293Z\",\"caller\":\"etcdserver/server.go:2560\",\"msg\":\"cluster version is updated\",\"cluster-version\":\"3.4\"}\nI0529 07:08:31.303399    3121 controller.go:302] etcd cluster state: etcdClusterState\n  members:\n    {\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995\"],\"ID\":\"17857827433355899461\"}\n  peers:\n    etcdClusterPeerInfo{peer=peer{id:\"etcd-events-a\" endpoints:\"172.20.36.217:3997\" }, info=cluster_name:\"etcd-events\" node_configuration:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995\" > etcd_state:<cluster:<cluster_token:\"NWfPtESfzkcrfnlnQfAn1w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" quarantined:true > }\nI0529 07:08:31.303533  
  3121 controller.go:303] etcd cluster members: map[17857827433355899461:{\"name\":\"etcd-events-a\",\"peerURLs\":[\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\"],\"endpoints\":[\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995\"],\"ID\":\"17857827433355899461\"}]\nI0529 07:08:31.303660    3121 controller.go:641] sending member map to all peers: members:<name:\"etcd-events-a\" dns:\"etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io\" addresses:\"172.20.36.217\" > \nI0529 07:08:31.303923    3121 etcdserver.go:248] updating hosts: map[172.20.36.217:[etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io]]\nI0529 07:08:31.303940    3121 hosts.go:84] hosts update: primary=map[172.20.36.217:[etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io]], fallbacks=map[etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:[172.20.36.217 172.20.36.217]], final=map[172.20.36.217:[etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io]]\nI0529 07:08:31.304068    3121 hosts.go:181] skipping update of unchanged /etc/hosts\nI0529 07:08:31.304215    3121 commands.go:38] not refreshing commands - TTL not hit\nI0529 07:08:31.304230    3121 s3fs.go:290] Reading file \"s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/events/control/etcd-cluster-created\"\nI0529 07:08:31.496815    3121 controller.go:395] spec member_count:1 etcd_version:\"3.4.13\" \nI0529 07:08:31.497483    3121 backup.go:134] performing snapshot save to /tmp/296119339/snapshot.db.gz\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.503Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.503Z\",\"caller\":\"v3rpc/maintenance.go:139\",\"msg\":\"sending database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.503Z\",\"caller\":\"v3rpc/maintenance.go:177\",\"msg\":\"sending database sha256 checksum to client\",\"total-bytes\":20480,\"checksum-size\":32}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.504Z\",\"caller\":\"v3rpc/maintenance.go:191\",\"msg\":\"successfully sent database snapshot to client\",\"total-bytes\":20480,\"size\":\"20 kB\",\"took\":\"now\"}\n{\"level\":\"info\",\"ts\":\"2021-05-29T07:08:31.507Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"}\nI0529 07:08:31.508527    3121 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/events/2021-05-29T07:08:31Z-000001/etcd.backup.gz\"\nI0529 07:08:31.713774    3121 s3fs.go:199] Writing file \"s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/events/2021-05-29T07:08:31Z-000001/_etcd_backup.meta\"\nI0529 07:08:31.920122    3121 backup.go:159] backup complete: name:\"2021-05-29T07:08:31Z-000001\" \nI0529 07:08:31.920559    3121 controller.go:937] backup response: name:\"2021-05-29T07:08:31Z-000001\" \nI0529 07:08:31.920706    3121 controller.go:576] took backup: name:\"2021-05-29T07:08:31Z-000001\" \nI0529 07:08:32.117057    3121 vfs.go:118] listed backups in s3://k8s-kops-prow/e2e-459b123097-cb70c.test-cncf-aws.k8s.io/backups/etcd/events: [2021-05-29T07:08:31Z-000001]\nI0529 07:08:32.117081    3121 cleanup.go:166] retaining backup \"2021-05-29T07:08:31Z-000001\"\nI0529 07:08:32.117108    3121 restore.go:98] Setting quarantined state to false\nI0529 07:08:32.117448    3121 
etcdserver.go:393] Reconfigure request: header:<leadership_token:\"fh19lkMhXp1GMdTrrY4WMA\" cluster_name:\"etcd-events\" > \nI0529 07:08:32.117552    3121 etcdserver.go:436] Stopping etcd for reconfigure request: header:<leadership_token:\"fh19lkMhXp1GMdTrrY4WMA\" cluster_name:\"etcd-events\" > \nI0529 07:08:32.117569    3121 etcdserver.go:640] killing etcd with datadir /rootfs/mnt/master-vol-0f9e3c5f961485f6d/data/NWfPtESfzkcrfnlnQfAn1w\nI0529 07:08:32.117745    3121 etcdprocess.go:131] Waiting for etcd to exit\nI0529 07:08:32.218060    3121 etcdprocess.go:131] Waiting for etcd to exit\nI0529 07:08:32.218079    3121 etcdprocess.go:136] Exited etcd: signal: killed\nI0529 07:08:32.218217    3121 etcdserver.go:443] updated cluster state: cluster:<cluster_token:\"NWfPtESfzkcrfnlnQfAn1w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0529 07:08:32.218439    3121 etcdserver.go:448] Starting etcd version \"3.4.13\"\nI0529 07:08:32.218453    3121 etcdserver.go:556] starting etcd with state cluster:<cluster_token:\"NWfPtESfzkcrfnlnQfAn1w\" nodes:<name:\"etcd-events-a\" peer_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:2381\" client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:4002\" quarantined_client_urls:\"https://etcd-events-a.internal.e2e-459b123097-cb70c.test-cncf-aws.k8s.io:3995\" tls_enabled:true > > etcd_version:\"3.4.13\" \nI0529 07:08:32.218546    3121 etcdserver.go:565] starting etcd with datadir /rootfs/mnt/master-vol-0f9e3c5f961485f6d/data/NWfPtESfzkcrfnlnQfAn1w\nI0529 07:08:32.218740    3