Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-08-13 04:09
Elapsed: 30m39s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0813 04:10:10.360570    4036 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0813 04:10:10.362096    4036 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.22.0-alpha.3+v1.22.0-alpha.2-196-gb1e6064501/linux/amd64/kops
I0813 04:10:11.142385    4036 up.go:43] Cleaning up any leaked resources from previous cluster
I0813 04:10:11.142538    4036 dumplogs.go:38] /logs/artifacts/2aff566e-fbec-11eb-9eab-220dac5d1fa2/kops toolbox dump --name e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0813 04:10:11.160956    4056 featureflag.go:173] FeatureFlag "SpecOverrideFlag"=true
I0813 04:10:11.161047    4056 featureflag.go:173] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io" not found
W0813 04:10:11.691396    4036 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0813 04:10:11.691458    4036 down.go:48] /logs/artifacts/2aff566e-fbec-11eb-9eab-220dac5d1fa2/kops delete cluster --name e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --yes
I0813 04:10:11.706898    4067 featureflag.go:173] FeatureFlag "SpecOverrideFlag"=true
I0813 04:10:11.707013    4067 featureflag.go:173] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io" not found
I0813 04:10:12.201677    4036 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/08/13 04:10:12 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0813 04:10:12.209302    4036 http.go:37] curl https://ip.jsb.workers.dev
I0813 04:10:12.298310    4036 up.go:144] /logs/artifacts/2aff566e-fbec-11eb-9eab-220dac5d1fa2/kops create cluster --name e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.4 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210720 --channel=alpha --networking=cilium --container-runtime=docker --admin-access 35.225.211.47/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ca-central-1a --master-size c5.large
I0813 04:10:12.315469    4076 featureflag.go:173] FeatureFlag "SpecOverrideFlag"=true
I0813 04:10:12.315586    4076 featureflag.go:173] FeatureFlag "AlphaAllowGCE"=true
I0813 04:10:12.340880    4076 create_cluster.go:827] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0813 04:10:12.939924    4076 new_cluster.go:1055]  Cloud Provider ID = aws
... skipping 41 lines ...

I0813 04:10:34.773299    4036 up.go:181] /logs/artifacts/2aff566e-fbec-11eb-9eab-220dac5d1fa2/kops validate cluster --name e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0813 04:10:34.791338    4097 featureflag.go:173] FeatureFlag "SpecOverrideFlag"=true
I0813 04:10:34.791431    4097 featureflag.go:173] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io

W0813 04:10:35.760524    4097 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0813 04:10:45.835807    4097 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0813 04:10:55.889303    4097 validate_cluster.go:232] (will retry): cluster not yet healthy
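The validator's retry loop above keeps polling until the api DNS record stops being the kops placeholder and resolves to a real master IP. A minimal Python sketch of that wait-for-DNS pattern (a hypothetical helper for illustration, not part of kops itself — it only checks resolvability, not the placeholder address):

```python
import socket
import time

def wait_for_dns(hostname, timeout=900, interval=10):
    """Poll DNS until `hostname` resolves or `timeout` seconds elapse.

    Mirrors the retry pattern in the validate loop above: treat a
    lookup failure as transient, sleep, and try again until the
    overall deadline passes.
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            # Returns the first IPv4 address the resolver hands back.
            return socket.gethostbyname(hostname)
        except socket.gaierror:
            if time.monotonic() >= deadline:
                raise TimeoutError(
                    f"{hostname} did not resolve within {timeout}s")
            time.sleep(interval)
```

For a host that already resolves (e.g. `localhost`), the call returns immediately without retrying; for a not-yet-propagated record like `api.e2e-...` above, it would spin until the record appears or the deadline expires.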
... skipping 336 lines (the dns/apiserver validation failure above repeats, retried roughly every 10s from 04:11:05 to 04:14:26) ...
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 8 lines ...
Machine	i-0c2319c994ff72d6e				machine "i-0c2319c994ff72d6e" has not yet joined cluster
Machine	i-0e5af0fe1d25af3e5				machine "i-0e5af0fe1d25af3e5" has not yet joined cluster
Pod	kube-system/cilium-crxcg			system-node-critical pod "cilium-crxcg" is not ready (cilium-agent)
Pod	kube-system/coredns-5dc785954d-5867r		system-cluster-critical pod "coredns-5dc785954d-5867r" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-njhtm	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-njhtm" is pending

Validation Failed
W0813 04:14:38.115840    4097 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 16 lines ...
Pod	kube-system/cilium-mnwnd			system-node-critical pod "cilium-mnwnd" is pending
Pod	kube-system/cilium-wqcjf			system-node-critical pod "cilium-wqcjf" is pending
Pod	kube-system/cilium-zp7bd			system-node-critical pod "cilium-zp7bd" is pending
Pod	kube-system/coredns-5dc785954d-5867r		system-cluster-critical pod "coredns-5dc785954d-5867r" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-njhtm	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-njhtm" is pending

Validation Failed
W0813 04:14:49.045311    4097 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 16 lines ...
Pod	kube-system/cilium-mnwnd			system-node-critical pod "cilium-mnwnd" is not ready (cilium-agent)
Pod	kube-system/cilium-wqcjf			system-node-critical pod "cilium-wqcjf" is not ready (cilium-agent)
Pod	kube-system/cilium-zp7bd			system-node-critical pod "cilium-zp7bd" is not ready (cilium-agent)
Pod	kube-system/coredns-5dc785954d-5867r		system-cluster-critical pod "coredns-5dc785954d-5867r" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-njhtm	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-njhtm" is pending

Validation Failed
W0813 04:14:59.935190    4097 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 11 lines ...
Pod	kube-system/cilium-mnwnd			system-node-critical pod "cilium-mnwnd" is not ready (cilium-agent)
Pod	kube-system/cilium-wqcjf			system-node-critical pod "cilium-wqcjf" is not ready (cilium-agent)
Pod	kube-system/cilium-zp7bd			system-node-critical pod "cilium-zp7bd" is not ready (cilium-agent)
Pod	kube-system/coredns-5dc785954d-5867r		system-cluster-critical pod "coredns-5dc785954d-5867r" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-njhtm	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-njhtm" is pending

Validation Failed
W0813 04:15:10.876834    4097 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 9 lines ...
KIND	NAME					MESSAGE
Pod	kube-system/cilium-crxcg		system-node-critical pod "cilium-crxcg" is not ready (cilium-agent)
Pod	kube-system/cilium-wqcjf		system-node-critical pod "cilium-wqcjf" is not ready (cilium-agent)
Pod	kube-system/cilium-zp7bd		system-node-critical pod "cilium-zp7bd" is not ready (cilium-agent)
Pod	kube-system/coredns-5dc785954d-5867r	system-cluster-critical pod "coredns-5dc785954d-5867r" is pending

Validation Failed
W0813 04:15:21.828626    4097 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 6 lines ...
ip-172-20-60-176.ca-central-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/cilium-crxcg	system-node-critical pod "cilium-crxcg" is not ready (cilium-agent)

Validation Failed
W0813 04:15:32.773059    4097 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 736 lines ...
STEP: Creating a kubernetes client
Aug 13 04:17:48.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
W0813 04:17:50.321416    4733 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug 13 04:17:50.321: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Aug 13 04:17:50.443: INFO: found topology map[topology.kubernetes.io/zone:ca-central-1a]
Aug 13 04:17:50.443: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Aug 13 04:17:50.443: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:17:52.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4519" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:17:51.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv-protection
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
Aug 13 04:17:55.288: INFO: AfterEach: Cleaning up test resources.
Aug 13 04:17:55.288: INFO: Deleting PersistentVolumeClaim "pvc-zrx58"
Aug 13 04:17:55.339: INFO: Deleting PersistentVolume "hostpath-dzsl5"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":2,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:17:55.391: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 68 lines ...
• [SLOW TEST:7.255 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:17:55.866: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 91 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:17:58.738: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
Aug 13 04:17:48.648: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-03bf7cae-a32e-41cc-9bf9-2513cf1e5fc1
STEP: Creating a pod to test consume configMaps
Aug 13 04:17:48.803: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4ade541f-4f8a-4b01-a20d-b7b3216d2d26" in namespace "projected-2027" to be "Succeeded or Failed"
Aug 13 04:17:48.859: INFO: Pod "pod-projected-configmaps-4ade541f-4f8a-4b01-a20d-b7b3216d2d26": Phase="Pending", Reason="", readiness=false. Elapsed: 55.828007ms
Aug 13 04:17:50.891: INFO: Pod "pod-projected-configmaps-4ade541f-4f8a-4b01-a20d-b7b3216d2d26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088355841s
Aug 13 04:17:52.926: INFO: Pod "pod-projected-configmaps-4ade541f-4f8a-4b01-a20d-b7b3216d2d26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122996973s
Aug 13 04:17:55.008: INFO: Pod "pod-projected-configmaps-4ade541f-4f8a-4b01-a20d-b7b3216d2d26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205522702s
Aug 13 04:17:57.062: INFO: Pod "pod-projected-configmaps-4ade541f-4f8a-4b01-a20d-b7b3216d2d26": Phase="Pending", Reason="", readiness=false. Elapsed: 8.259342347s
Aug 13 04:17:59.122: INFO: Pod "pod-projected-configmaps-4ade541f-4f8a-4b01-a20d-b7b3216d2d26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.319661701s
STEP: Saw pod success
Aug 13 04:17:59.123: INFO: Pod "pod-projected-configmaps-4ade541f-4f8a-4b01-a20d-b7b3216d2d26" satisfied condition "Succeeded or Failed"
Aug 13 04:17:59.172: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-projected-configmaps-4ade541f-4f8a-4b01-a20d-b7b3216d2d26 container agnhost-container: <nil>
STEP: delete the pod
Aug 13 04:17:59.828: INFO: Waiting for pod pod-projected-configmaps-4ade541f-4f8a-4b01-a20d-b7b3216d2d26 to disappear
Aug 13 04:17:59.859: INFO: Pod pod-projected-configmaps-4ade541f-4f8a-4b01-a20d-b7b3216d2d26 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.458 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:17:59.969: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 40 lines ...
• [SLOW TEST:11.655 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:00.189: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Aug 13 04:17:50.515: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08123c27-dab1-492b-a086-8e321e9712d1" in namespace "projected-2823" to be "Succeeded or Failed"
Aug 13 04:17:50.546: INFO: Pod "downwardapi-volume-08123c27-dab1-492b-a086-8e321e9712d1": Phase="Pending", Reason="", readiness=false. Elapsed: 31.002577ms
Aug 13 04:17:52.581: INFO: Pod "downwardapi-volume-08123c27-dab1-492b-a086-8e321e9712d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065304768s
Aug 13 04:17:54.684: INFO: Pod "downwardapi-volume-08123c27-dab1-492b-a086-8e321e9712d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168733866s
Aug 13 04:17:56.792: INFO: Pod "downwardapi-volume-08123c27-dab1-492b-a086-8e321e9712d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276259807s
Aug 13 04:17:58.883: INFO: Pod "downwardapi-volume-08123c27-dab1-492b-a086-8e321e9712d1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.367383916s
Aug 13 04:18:00.980: INFO: Pod "downwardapi-volume-08123c27-dab1-492b-a086-8e321e9712d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.464821022s
STEP: Saw pod success
Aug 13 04:18:00.980: INFO: Pod "downwardapi-volume-08123c27-dab1-492b-a086-8e321e9712d1" satisfied condition "Succeeded or Failed"
Aug 13 04:18:01.037: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod downwardapi-volume-08123c27-dab1-492b-a086-8e321e9712d1 container client-container: <nil>
STEP: delete the pod
Aug 13 04:18:01.451: INFO: Waiting for pod downwardapi-volume-08123c27-dab1-492b-a086-8e321e9712d1 to disappear
Aug 13 04:18:01.567: INFO: Pod downwardapi-volume-08123c27-dab1-492b-a086-8e321e9712d1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.836 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:01.749: INFO: Only supported for providers [vsphere] (not aws)
... skipping 165 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:17:57.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 13 04:17:58.052: INFO: Waiting up to 5m0s for pod "pod-774dbbf8-5377-4b81-95b6-b9b77b66009f" in namespace "emptydir-4918" to be "Succeeded or Failed"
Aug 13 04:17:58.115: INFO: Pod "pod-774dbbf8-5377-4b81-95b6-b9b77b66009f": Phase="Pending", Reason="", readiness=false. Elapsed: 62.580893ms
Aug 13 04:18:00.155: INFO: Pod "pod-774dbbf8-5377-4b81-95b6-b9b77b66009f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10286543s
Aug 13 04:18:02.238: INFO: Pod "pod-774dbbf8-5377-4b81-95b6-b9b77b66009f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185995586s
Aug 13 04:18:04.285: INFO: Pod "pod-774dbbf8-5377-4b81-95b6-b9b77b66009f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233529406s
Aug 13 04:18:06.318: INFO: Pod "pod-774dbbf8-5377-4b81-95b6-b9b77b66009f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265855729s
Aug 13 04:18:08.349: INFO: Pod "pod-774dbbf8-5377-4b81-95b6-b9b77b66009f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.296999415s
STEP: Saw pod success
Aug 13 04:18:08.349: INFO: Pod "pod-774dbbf8-5377-4b81-95b6-b9b77b66009f" satisfied condition "Succeeded or Failed"
Aug 13 04:18:08.380: INFO: Trying to get logs from node ip-172-20-46-56.ca-central-1.compute.internal pod pod-774dbbf8-5377-4b81-95b6-b9b77b66009f container test-container: <nil>
STEP: delete the pod
Aug 13 04:18:08.450: INFO: Waiting for pod pod-774dbbf8-5377-4b81-95b6-b9b77b66009f to disappear
Aug 13 04:18:08.480: INFO: Pod pod-774dbbf8-5377-4b81-95b6-b9b77b66009f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is non-root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":3,"skipped":9,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:08.581: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 76 lines ...
• [SLOW TEST:20.395 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:09.205: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 151 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Aug 13 04:18:02.368: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6147" to be "Succeeded or Failed"
Aug 13 04:18:02.420: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 50.947852ms
Aug 13 04:18:04.452: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083621302s
Aug 13 04:18:06.485: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116023409s
Aug 13 04:18:08.518: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149020535s
Aug 13 04:18:10.550: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.181468408s
STEP: Saw pod success
Aug 13 04:18:10.550: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug 13 04:18:10.581: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Aug 13 04:18:10.656: INFO: Waiting for pod pod-host-path-test to disappear
Aug 13 04:18:10.687: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.711 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":3,"skipped":24,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:22.554 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:11.063: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 48 lines ...
STEP: Destroying namespace "services-2076" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:11.741: INFO: Only supported for providers [azure] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
... skipping 88 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:13.204: INFO: Only supported for providers [azure] (not aws)
... skipping 159 lines ...
Aug 13 04:18:07.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 13 04:18:07.482: INFO: Waiting up to 5m0s for pod "pod-24f9e97a-566c-4594-9cdb-53b2e413a775" in namespace "emptydir-7313" to be "Succeeded or Failed"
Aug 13 04:18:07.513: INFO: Pod "pod-24f9e97a-566c-4594-9cdb-53b2e413a775": Phase="Pending", Reason="", readiness=false. Elapsed: 30.610801ms
Aug 13 04:18:09.544: INFO: Pod "pod-24f9e97a-566c-4594-9cdb-53b2e413a775": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06248384s
Aug 13 04:18:11.577: INFO: Pod "pod-24f9e97a-566c-4594-9cdb-53b2e413a775": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095160857s
Aug 13 04:18:13.608: INFO: Pod "pod-24f9e97a-566c-4594-9cdb-53b2e413a775": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126445349s
STEP: Saw pod success
Aug 13 04:18:13.608: INFO: Pod "pod-24f9e97a-566c-4594-9cdb-53b2e413a775" satisfied condition "Succeeded or Failed"
Aug 13 04:18:13.639: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-24f9e97a-566c-4594-9cdb-53b2e413a775 container test-container: <nil>
STEP: delete the pod
Aug 13 04:18:13.741: INFO: Waiting for pod pod-24f9e97a-566c-4594-9cdb-53b2e413a775 to disappear
Aug 13 04:18:13.772: INFO: Pod pod-24f9e97a-566c-4594-9cdb-53b2e413a775 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.543 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:13.847: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
Aug 13 04:18:03.733: INFO: PersistentVolumeClaim pvc-mxtf2 found but phase is Pending instead of Bound.
Aug 13 04:18:05.764: INFO: PersistentVolumeClaim pvc-mxtf2 found and phase=Bound (6.189030129s)
Aug 13 04:18:05.764: INFO: Waiting up to 3m0s for PersistentVolume local-zhm7m to have phase Bound
Aug 13 04:18:05.796: INFO: PersistentVolume local-zhm7m found and phase=Bound (31.987382ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vtb7
STEP: Creating a pod to test subpath
Aug 13 04:18:05.892: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vtb7" in namespace "provisioning-311" to be "Succeeded or Failed"
Aug 13 04:18:05.923: INFO: Pod "pod-subpath-test-preprovisionedpv-vtb7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.4357ms
Aug 13 04:18:07.954: INFO: Pod "pod-subpath-test-preprovisionedpv-vtb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061925848s
Aug 13 04:18:09.990: INFO: Pod "pod-subpath-test-preprovisionedpv-vtb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097873926s
Aug 13 04:18:12.022: INFO: Pod "pod-subpath-test-preprovisionedpv-vtb7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129673486s
Aug 13 04:18:14.056: INFO: Pod "pod-subpath-test-preprovisionedpv-vtb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.163962129s
STEP: Saw pod success
Aug 13 04:18:14.056: INFO: Pod "pod-subpath-test-preprovisionedpv-vtb7" satisfied condition "Succeeded or Failed"
Aug 13 04:18:14.087: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-vtb7 container test-container-volume-preprovisionedpv-vtb7: <nil>
STEP: delete the pod
Aug 13 04:18:14.160: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vtb7 to disappear
Aug 13 04:18:14.193: INFO: Pod pod-subpath-test-preprovisionedpv-vtb7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vtb7
Aug 13 04:18:14.193: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vtb7" in namespace "provisioning-311"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:15.419: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-c46cfce5-7597-418f-96d7-06aabdfe5ab2
STEP: Creating a pod to test consume configMaps
Aug 13 04:18:00.878: INFO: Waiting up to 5m0s for pod "pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb" in namespace "configmap-2687" to be "Succeeded or Failed"
Aug 13 04:18:00.942: INFO: Pod "pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb": Phase="Pending", Reason="", readiness=false. Elapsed: 63.841442ms
Aug 13 04:18:02.974: INFO: Pod "pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095732796s
Aug 13 04:18:05.006: INFO: Pod "pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127743553s
Aug 13 04:18:07.041: INFO: Pod "pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163015231s
Aug 13 04:18:09.074: INFO: Pod "pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.196113171s
Aug 13 04:18:11.105: INFO: Pod "pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.226891446s
Aug 13 04:18:13.137: INFO: Pod "pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.259453528s
Aug 13 04:18:15.168: INFO: Pod "pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.290480855s
STEP: Saw pod success
Aug 13 04:18:15.168: INFO: Pod "pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb" satisfied condition "Succeeded or Failed"
Aug 13 04:18:15.199: INFO: Trying to get logs from node ip-172-20-46-56.ca-central-1.compute.internal pod pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb container agnhost-container: <nil>
STEP: delete the pod
Aug 13 04:18:15.293: INFO: Waiting for pod pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb to disappear
Aug 13 04:18:15.329: INFO: Pod pod-configmaps-78731871-3466-4267-8cf4-32b4e86c45cb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:15.438: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 60 lines ...
Aug 13 04:18:15.713: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.219 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 187 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":2,"skipped":32,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:18:09.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:19.989: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Aug 13 04:17:52.811: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Aug 13 04:17:52.811: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-cczs
STEP: Creating a pod to test atomic-volume-subpath
Aug 13 04:17:52.847: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-cczs" in namespace "provisioning-2925" to be "Succeeded or Failed"
Aug 13 04:17:52.879: INFO: Pod "pod-subpath-test-inlinevolume-cczs": Phase="Pending", Reason="", readiness=false. Elapsed: 31.698049ms
Aug 13 04:17:54.949: INFO: Pod "pod-subpath-test-inlinevolume-cczs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101301208s
Aug 13 04:17:57.006: INFO: Pod "pod-subpath-test-inlinevolume-cczs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158569563s
Aug 13 04:17:59.078: INFO: Pod "pod-subpath-test-inlinevolume-cczs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230272783s
Aug 13 04:18:01.156: INFO: Pod "pod-subpath-test-inlinevolume-cczs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.308425272s
Aug 13 04:18:03.187: INFO: Pod "pod-subpath-test-inlinevolume-cczs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.339552306s
... skipping 4 lines ...
Aug 13 04:18:13.346: INFO: Pod "pod-subpath-test-inlinevolume-cczs": Phase="Running", Reason="", readiness=true. Elapsed: 20.498770147s
Aug 13 04:18:15.377: INFO: Pod "pod-subpath-test-inlinevolume-cczs": Phase="Running", Reason="", readiness=true. Elapsed: 22.529601089s
Aug 13 04:18:17.409: INFO: Pod "pod-subpath-test-inlinevolume-cczs": Phase="Running", Reason="", readiness=true. Elapsed: 24.561631068s
Aug 13 04:18:19.440: INFO: Pod "pod-subpath-test-inlinevolume-cczs": Phase="Running", Reason="", readiness=true. Elapsed: 26.592843623s
Aug 13 04:18:21.472: INFO: Pod "pod-subpath-test-inlinevolume-cczs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.624960782s
STEP: Saw pod success
Aug 13 04:18:21.472: INFO: Pod "pod-subpath-test-inlinevolume-cczs" satisfied condition "Succeeded or Failed"
Aug 13 04:18:21.503: INFO: Trying to get logs from node ip-172-20-46-56.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-cczs container test-container-subpath-inlinevolume-cczs: <nil>
STEP: delete the pod
Aug 13 04:18:21.585: INFO: Waiting for pod pod-subpath-test-inlinevolume-cczs to disappear
Aug 13 04:18:21.615: INFO: Pod pod-subpath-test-inlinevolume-cczs no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-cczs
Aug 13 04:18:21.615: INFO: Deleting pod "pod-subpath-test-inlinevolume-cczs" in namespace "provisioning-2925"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:21.750: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 95 lines ...
Aug 13 04:18:13.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 13 04:18:13.534: INFO: Waiting up to 5m0s for pod "pod-2c4b75a1-db2e-4db9-a6a3-71603abcfe56" in namespace "emptydir-5913" to be "Succeeded or Failed"
Aug 13 04:18:13.566: INFO: Pod "pod-2c4b75a1-db2e-4db9-a6a3-71603abcfe56": Phase="Pending", Reason="", readiness=false. Elapsed: 31.400843ms
Aug 13 04:18:15.597: INFO: Pod "pod-2c4b75a1-db2e-4db9-a6a3-71603abcfe56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062564675s
Aug 13 04:18:17.629: INFO: Pod "pod-2c4b75a1-db2e-4db9-a6a3-71603abcfe56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094384936s
Aug 13 04:18:19.661: INFO: Pod "pod-2c4b75a1-db2e-4db9-a6a3-71603abcfe56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12649617s
Aug 13 04:18:21.693: INFO: Pod "pod-2c4b75a1-db2e-4db9-a6a3-71603abcfe56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158910449s
Aug 13 04:18:23.725: INFO: Pod "pod-2c4b75a1-db2e-4db9-a6a3-71603abcfe56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.190485632s
STEP: Saw pod success
Aug 13 04:18:23.725: INFO: Pod "pod-2c4b75a1-db2e-4db9-a6a3-71603abcfe56" satisfied condition "Succeeded or Failed"
Aug 13 04:18:23.757: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-2c4b75a1-db2e-4db9-a6a3-71603abcfe56 container test-container: <nil>
STEP: delete the pod
Aug 13 04:18:23.839: INFO: Waiting for pod pod-2c4b75a1-db2e-4db9-a6a3-71603abcfe56 to disappear
Aug 13 04:18:23.870: INFO: Pod pod-2c4b75a1-db2e-4db9-a6a3-71603abcfe56 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.591 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:18:24.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9201" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:24.453: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
STEP: Destroying namespace "services-5664" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":3,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0813 04:17:50.271081    4765 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug 13 04:17:50.271: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token: 
Aug 13 04:17:50.365: INFO: Waiting up to 5m0s for pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88" in namespace "svcaccounts-9723" to be "Succeeded or Failed"
Aug 13 04:17:50.396: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 30.701143ms
Aug 13 04:17:52.428: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062484045s
Aug 13 04:17:54.542: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176352028s
Aug 13 04:17:56.640: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275034897s
Aug 13 04:17:58.741: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.375647926s
STEP: Saw pod success
Aug 13 04:17:58.741: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88" satisfied condition "Succeeded or Failed"
Aug 13 04:17:58.882: INFO: Trying to get logs from node ip-172-20-46-56.ca-central-1.compute.internal pod test-pod-413f8361-7eb5-408f-9833-47206c1c3c88 container agnhost-container: <nil>
STEP: delete the pod
Aug 13 04:17:59.248: INFO: Waiting for pod test-pod-413f8361-7eb5-408f-9833-47206c1c3c88 to disappear
Aug 13 04:17:59.289: INFO: Pod test-pod-413f8361-7eb5-408f-9833-47206c1c3c88 no longer exists
STEP: Creating a pod to test service account token: 
Aug 13 04:17:59.328: INFO: Waiting up to 5m0s for pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88" in namespace "svcaccounts-9723" to be "Succeeded or Failed"
Aug 13 04:17:59.362: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 33.883662ms
Aug 13 04:18:01.473: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144958318s
Aug 13 04:18:03.505: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176975578s
Aug 13 04:18:05.537: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2083155s
Aug 13 04:18:07.568: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.239418754s
STEP: Saw pod success
Aug 13 04:18:07.568: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88" satisfied condition "Succeeded or Failed"
Aug 13 04:18:07.598: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod test-pod-413f8361-7eb5-408f-9833-47206c1c3c88 container agnhost-container: <nil>
STEP: delete the pod
Aug 13 04:18:07.916: INFO: Waiting for pod test-pod-413f8361-7eb5-408f-9833-47206c1c3c88 to disappear
Aug 13 04:18:07.946: INFO: Pod test-pod-413f8361-7eb5-408f-9833-47206c1c3c88 no longer exists
STEP: Creating a pod to test service account token: 
Aug 13 04:18:07.978: INFO: Waiting up to 5m0s for pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88" in namespace "svcaccounts-9723" to be "Succeeded or Failed"
Aug 13 04:18:08.008: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 30.495501ms
Aug 13 04:18:10.039: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061639925s
Aug 13 04:18:12.077: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0995828s
Aug 13 04:18:14.110: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13261719s
STEP: Saw pod success
Aug 13 04:18:14.110: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88" satisfied condition "Succeeded or Failed"
Aug 13 04:18:14.141: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod test-pod-413f8361-7eb5-408f-9833-47206c1c3c88 container agnhost-container: <nil>
STEP: delete the pod
Aug 13 04:18:14.218: INFO: Waiting for pod test-pod-413f8361-7eb5-408f-9833-47206c1c3c88 to disappear
Aug 13 04:18:14.248: INFO: Pod test-pod-413f8361-7eb5-408f-9833-47206c1c3c88 no longer exists
STEP: Creating a pod to test service account token: 
Aug 13 04:18:14.280: INFO: Waiting up to 5m0s for pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88" in namespace "svcaccounts-9723" to be "Succeeded or Failed"
Aug 13 04:18:14.313: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 33.084229ms
Aug 13 04:18:16.345: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065124116s
Aug 13 04:18:18.378: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097695987s
Aug 13 04:18:20.411: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130460421s
Aug 13 04:18:22.442: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.161631102s
Aug 13 04:18:24.507: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.22668373s
STEP: Saw pod success
Aug 13 04:18:24.507: INFO: Pod "test-pod-413f8361-7eb5-408f-9833-47206c1c3c88" satisfied condition "Succeeded or Failed"
Aug 13 04:18:24.563: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod test-pod-413f8361-7eb5-408f-9833-47206c1c3c88 container agnhost-container: <nil>
STEP: delete the pod
Aug 13 04:18:24.684: INFO: Waiting for pod test-pod-413f8361-7eb5-408f-9833-47206c1c3c88 to disappear
Aug 13 04:18:24.720: INFO: Pod test-pod-413f8361-7eb5-408f-9833-47206c1c3c88 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:36.001 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:24.823: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 191 lines ...
Aug 13 04:18:25.024: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.225 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 184 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:18:25.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-5001" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:26.020: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 103 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:18:27.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2536" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":4,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:27.642: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 79 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:17:52.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
Aug 13 04:18:15.189: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:15.220: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:15.443: INFO: Unable to read jessie_udp@dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:15.480: INFO: Unable to read jessie_tcp@dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:15.516: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:15.549: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:15.737: INFO: Lookups using dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3 failed for: [wheezy_udp@dns-test-service.dns-7843.svc.cluster.local wheezy_tcp@dns-test-service.dns-7843.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local jessie_udp@dns-test-service.dns-7843.svc.cluster.local jessie_tcp@dns-test-service.dns-7843.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local]

Aug 13 04:18:20.769: INFO: Unable to read wheezy_udp@dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:20.802: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:20.838: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:20.869: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:21.086: INFO: Unable to read jessie_udp@dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:21.117: INFO: Unable to read jessie_tcp@dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:21.147: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:21.181: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:21.373: INFO: Lookups using dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3 failed for: [wheezy_udp@dns-test-service.dns-7843.svc.cluster.local wheezy_tcp@dns-test-service.dns-7843.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local jessie_udp@dns-test-service.dns-7843.svc.cluster.local jessie_tcp@dns-test-service.dns-7843.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local]

Aug 13 04:18:25.790: INFO: Unable to read wheezy_udp@dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:25.835: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:25.891: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:25.943: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:26.180: INFO: Unable to read jessie_udp@dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:26.221: INFO: Unable to read jessie_tcp@dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:26.253: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:26.287: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local from pod dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3: the server could not find the requested resource (get pods dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3)
Aug 13 04:18:26.502: INFO: Lookups using dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3 failed for: [wheezy_udp@dns-test-service.dns-7843.svc.cluster.local wheezy_tcp@dns-test-service.dns-7843.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local jessie_udp@dns-test-service.dns-7843.svc.cluster.local jessie_tcp@dns-test-service.dns-7843.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7843.svc.cluster.local]

Aug 13 04:18:31.382: INFO: DNS probes using dns-7843/dns-test-d4279ba1-f12d-4d1c-b20d-d52ed18480d3 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:39.398 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:47.976 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:189
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:36.809: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Aug 13 04:18:17.973: INFO: PersistentVolumeClaim pvc-6wtf8 found but phase is Pending instead of Bound.
Aug 13 04:18:20.004: INFO: PersistentVolumeClaim pvc-6wtf8 found and phase=Bound (14.258792362s)
Aug 13 04:18:20.004: INFO: Waiting up to 3m0s for PersistentVolume local-s7g4q to have phase Bound
Aug 13 04:18:20.035: INFO: PersistentVolume local-s7g4q found and phase=Bound (31.46989ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-npr2
STEP: Creating a pod to test subpath
Aug 13 04:18:20.129: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-npr2" in namespace "provisioning-3505" to be "Succeeded or Failed"
Aug 13 04:18:20.160: INFO: Pod "pod-subpath-test-preprovisionedpv-npr2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.870526ms
Aug 13 04:18:22.193: INFO: Pod "pod-subpath-test-preprovisionedpv-npr2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06419488s
Aug 13 04:18:24.234: INFO: Pod "pod-subpath-test-preprovisionedpv-npr2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104499288s
Aug 13 04:18:26.265: INFO: Pod "pod-subpath-test-preprovisionedpv-npr2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136115537s
Aug 13 04:18:28.297: INFO: Pod "pod-subpath-test-preprovisionedpv-npr2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167911473s
Aug 13 04:18:30.329: INFO: Pod "pod-subpath-test-preprovisionedpv-npr2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.199932013s
Aug 13 04:18:32.362: INFO: Pod "pod-subpath-test-preprovisionedpv-npr2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.2329577s
Aug 13 04:18:34.393: INFO: Pod "pod-subpath-test-preprovisionedpv-npr2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.263864919s
Aug 13 04:18:36.424: INFO: Pod "pod-subpath-test-preprovisionedpv-npr2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.295076215s
STEP: Saw pod success
Aug 13 04:18:36.424: INFO: Pod "pod-subpath-test-preprovisionedpv-npr2" satisfied condition "Succeeded or Failed"
Aug 13 04:18:36.455: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-npr2 container test-container-subpath-preprovisionedpv-npr2: <nil>
STEP: delete the pod
Aug 13 04:18:36.523: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-npr2 to disappear
Aug 13 04:18:36.555: INFO: Pod pod-subpath-test-preprovisionedpv-npr2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-npr2
Aug 13 04:18:36.555: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-npr2" in namespace "provisioning-3505"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:37.116: INFO: Only supported for providers [openstack] (not aws)
... skipping 97 lines ...
• [SLOW TEST:27.141 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:18:38.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 13 04:18:39.133: INFO: Waiting up to 5m0s for pod "pod-8b9deb6a-781a-4159-991c-190d261ae6da" in namespace "emptydir-7522" to be "Succeeded or Failed"
Aug 13 04:18:39.164: INFO: Pod "pod-8b9deb6a-781a-4159-991c-190d261ae6da": Phase="Pending", Reason="", readiness=false. Elapsed: 30.621751ms
Aug 13 04:18:41.197: INFO: Pod "pod-8b9deb6a-781a-4159-991c-190d261ae6da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063783014s
STEP: Saw pod success
Aug 13 04:18:41.197: INFO: Pod "pod-8b9deb6a-781a-4159-991c-190d261ae6da" satisfied condition "Succeeded or Failed"
Aug 13 04:18:41.230: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod pod-8b9deb6a-781a-4159-991c-190d261ae6da container test-container: <nil>
STEP: delete the pod
Aug 13 04:18:41.306: INFO: Waiting for pod pod-8b9deb6a-781a-4159-991c-190d261ae6da to disappear
Aug 13 04:18:41.339: INFO: Pod pod-8b9deb6a-781a-4159-991c-190d261ae6da no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:18:41.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7522" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":4,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:18:41.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap that has name configmap-test-emptyKey-0ad4def8-fcd1-4a72-b5f0-40d6b0e01dd7
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:18:41.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3526" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:41.701: INFO: Only supported for providers [gce gke] (not aws)
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:18:41.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-limits-on-node-8435" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Volume limits should verify that all nodes have volume limits","total":-1,"completed":6,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:42.075: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 127 lines ...
Aug 13 04:18:37.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Aug 13 04:18:37.334: INFO: Waiting up to 5m0s for pod "client-containers-de6239b0-52f0-4a9d-a0e9-8cad073adbc6" in namespace "containers-5573" to be "Succeeded or Failed"
Aug 13 04:18:37.364: INFO: Pod "client-containers-de6239b0-52f0-4a9d-a0e9-8cad073adbc6": Phase="Pending", Reason="", readiness=false. Elapsed: 30.316885ms
Aug 13 04:18:39.396: INFO: Pod "client-containers-de6239b0-52f0-4a9d-a0e9-8cad073adbc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061655363s
Aug 13 04:18:41.428: INFO: Pod "client-containers-de6239b0-52f0-4a9d-a0e9-8cad073adbc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093637924s
Aug 13 04:18:43.459: INFO: Pod "client-containers-de6239b0-52f0-4a9d-a0e9-8cad073adbc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124587445s
Aug 13 04:18:45.491: INFO: Pod "client-containers-de6239b0-52f0-4a9d-a0e9-8cad073adbc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156741046s
Aug 13 04:18:47.523: INFO: Pod "client-containers-de6239b0-52f0-4a9d-a0e9-8cad073adbc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.18911294s
STEP: Saw pod success
Aug 13 04:18:47.523: INFO: Pod "client-containers-de6239b0-52f0-4a9d-a0e9-8cad073adbc6" satisfied condition "Succeeded or Failed"
Aug 13 04:18:47.554: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod client-containers-de6239b0-52f0-4a9d-a0e9-8cad073adbc6 container agnhost-container: <nil>
STEP: delete the pod
Aug 13 04:18:47.623: INFO: Waiting for pod client-containers-de6239b0-52f0-4a9d-a0e9-8cad073adbc6 to disappear
Aug 13 04:18:47.654: INFO: Pod client-containers-de6239b0-52f0-4a9d-a0e9-8cad073adbc6 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.574 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:47.738: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:18:43.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:18:48.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5759" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:18:42.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Aug 13 04:18:42.294: INFO: Waiting up to 5m0s for pod "security-context-fbcd9bc8-9a8a-4287-8070-2e090dbecbf6" in namespace "security-context-3690" to be "Succeeded or Failed"
Aug 13 04:18:42.327: INFO: Pod "security-context-fbcd9bc8-9a8a-4287-8070-2e090dbecbf6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.406011ms
Aug 13 04:18:44.359: INFO: Pod "security-context-fbcd9bc8-9a8a-4287-8070-2e090dbecbf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065104337s
Aug 13 04:18:46.391: INFO: Pod "security-context-fbcd9bc8-9a8a-4287-8070-2e090dbecbf6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097219278s
Aug 13 04:18:48.422: INFO: Pod "security-context-fbcd9bc8-9a8a-4287-8070-2e090dbecbf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128493694s
STEP: Saw pod success
Aug 13 04:18:48.422: INFO: Pod "security-context-fbcd9bc8-9a8a-4287-8070-2e090dbecbf6" satisfied condition "Succeeded or Failed"
Aug 13 04:18:48.453: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod security-context-fbcd9bc8-9a8a-4287-8070-2e090dbecbf6 container test-container: <nil>
STEP: delete the pod
Aug 13 04:18:48.521: INFO: Waiting for pod security-context-fbcd9bc8-9a8a-4287-8070-2e090dbecbf6 to disappear
Aug 13 04:18:48.553: INFO: Pod security-context-fbcd9bc8-9a8a-4287-8070-2e090dbecbf6 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.519 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":36,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, no capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":4,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:48.884: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:18:50.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-778" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":3,"skipped":4,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:50.650: INFO: Only supported for providers [gce gke] (not aws)
... skipping 79 lines ...
Aug 13 04:18:41.158: INFO: Pod aws-client still exists
Aug 13 04:18:43.126: INFO: Waiting for pod aws-client to disappear
Aug 13 04:18:43.157: INFO: Pod aws-client still exists
Aug 13 04:18:45.127: INFO: Waiting for pod aws-client to disappear
Aug 13 04:18:45.158: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Aug 13 04:18:45.335: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0f20487c716cde079", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f20487c716cde079 is currently attached to i-0c14d6c681e2c4f4f
	status code: 400, request id: b7044e2c-1608-4341-9710-911c68952c08
Aug 13 04:18:50.659: INFO: Successfully deleted PD "aws://ca-central-1a/vol-0f20487c716cde079".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:18:50.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5958" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:50.796: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 5 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 31 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 25 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:18:36.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:18:51.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5631" for this suite.


• [SLOW TEST:14.318 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:18:20.019: INFO: >>> kubeConfig: /root/.kube/config
... skipping 60 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should contain last line of the log
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:605
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":4,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:18:54.551: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 43 lines ...
STEP: Creating a kubernetes client
Aug 13 04:18:54.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146
[It] should report an error and create no PV
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825
STEP: creating a StorageClass
STEP: Creating a StorageClass
STEP: creating a claim object with a suffix for gluster dynamic provisioner
Aug 13 04:18:54.804: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug 13 04:19:00.871: INFO: deleting claim "volume-provisioning-8619"/"pvc-ml7hc"
... skipping 6 lines ...

• [SLOW TEST:6.423 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:824
    should report an error and create no PV
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","total":-1,"completed":5,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:01.021: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 81 lines ...
STEP: Deleting pod verify-service-up-exec-pod-4v8dw in namespace services-6318
STEP: verifying service-disabled is not up
Aug 13 04:18:18.220: INFO: Creating new host exec pod
Aug 13 04:18:18.285: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:20.316: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:22.316: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Aug 13 04:18:22.316: INFO: Running '/tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6318 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.134.183:80 && echo service-down-failed'
Aug 13 04:18:24.845: INFO: rc: 28
Aug 13 04:18:24.845: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.66.134.183:80 && echo service-down-failed" in pod services-6318/verify-service-down-host-exec-pod: error running /tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6318 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.134.183:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.66.134.183:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-6318
STEP: adding service-proxy-name label
STEP: verifying service is not up
Aug 13 04:18:24.946: INFO: Creating new host exec pod
... skipping 3 lines ...
Aug 13 04:18:31.055: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:33.052: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:35.052: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:37.052: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:39.053: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:41.052: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Aug 13 04:18:41.052: INFO: Running '/tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6318 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.242.135:80 && echo service-down-failed'
Aug 13 04:18:43.575: INFO: rc: 28
Aug 13 04:18:43.575: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.64.242.135:80 && echo service-down-failed" in pod services-6318/verify-service-down-host-exec-pod: error running /tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6318 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.242.135:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.64.242.135:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-6318
STEP: removing service-proxy-name annotation
STEP: verifying service is up
Aug 13 04:18:43.673: INFO: Creating new host exec pod
... skipping 15 lines ...
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6318
STEP: Deleting pod verify-service-up-exec-pod-ls9l6 in namespace services-6318
STEP: verifying service-disabled is still not up
Aug 13 04:18:57.318: INFO: Creating new host exec pod
Aug 13 04:18:57.407: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:59.441: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Aug 13 04:18:59.441: INFO: Running '/tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6318 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.134.183:80 && echo service-down-failed'
Aug 13 04:19:01.925: INFO: rc: 28
Aug 13 04:19:01.925: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.66.134.183:80 && echo service-down-failed" in pod services-6318/verify-service-down-host-exec-pod: error running /tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6318 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.134.183:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.66.134.183:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-6318
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:19:01.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:73.370 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:02.072: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:19:02.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6382" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:02.610: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 44 lines ...
• [SLOW TEST:36.196 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":5,"skipped":61,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:03.942: INFO: Driver local doesn't support ext3 -- skipping
... skipping 70 lines ...
STEP: Creating pod
Aug 13 04:18:16.778: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug 13 04:18:16.811: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-szfnj] to have phase Bound
Aug 13 04:18:16.841: INFO: PersistentVolumeClaim pvc-szfnj found but phase is Pending instead of Bound.
Aug 13 04:18:18.873: INFO: PersistentVolumeClaim pvc-szfnj found and phase=Bound (2.061891468s)
STEP: checking for CSIInlineVolumes feature
Aug 13 04:18:35.116: INFO: Error getting logs for pod inline-volume-ks2z7: the server rejected our request for an unknown reason (get pods inline-volume-ks2z7)
Aug 13 04:18:35.179: INFO: Deleting pod "inline-volume-ks2z7" in namespace "csi-mock-volumes-2861"
Aug 13 04:18:35.218: INFO: Wait up to 5m0s for pod "inline-volume-ks2z7" to be fully deleted
STEP: Deleting the previously created pod
Aug 13 04:18:39.280: INFO: Deleting pod "pvc-volume-tester-qh6p2" in namespace "csi-mock-volumes-2861"
Aug 13 04:18:39.312: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qh6p2" to be fully deleted
STEP: Checking CSI driver logs
Aug 13 04:18:43.409: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-2861
Aug 13 04:18:43.409: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 41c745ce-44ec-4127-a150-09b9f16c8c92
Aug 13 04:18:43.409: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Aug 13 04:18:43.409: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Aug 13 04:18:43.409: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-qh6p2
Aug 13 04:18:43.409: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/41c745ce-44ec-4127-a150-09b9f16c8c92/volumes/kubernetes.io~csi/pvc-915766d9-6d2d-476b-93d1-093d4972610e/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-qh6p2
Aug 13 04:18:43.409: INFO: Deleting pod "pvc-volume-tester-qh6p2" in namespace "csi-mock-volumes-2861"
STEP: Deleting claim pvc-szfnj
Aug 13 04:18:43.505: INFO: Waiting up to 2m0s for PersistentVolume pvc-915766d9-6d2d-476b-93d1-093d4972610e to get deleted
Aug 13 04:18:43.543: INFO: PersistentVolume pvc-915766d9-6d2d-476b-93d1-093d4972610e found and phase=Released (37.909495ms)
Aug 13 04:18:45.575: INFO: PersistentVolume pvc-915766d9-6d2d-476b-93d1-093d4972610e was removed
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:04.852: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:19:04.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1162" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":6,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:04.894: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 173 lines ...
Aug 13 04:19:04.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Aug 13 04:19:05.070: INFO: Waiting up to 5m0s for pod "var-expansion-b5a63281-4330-4a7f-a0c1-c4721fa3a040" in namespace "var-expansion-1788" to be "Succeeded or Failed"
Aug 13 04:19:05.101: INFO: Pod "var-expansion-b5a63281-4330-4a7f-a0c1-c4721fa3a040": Phase="Pending", Reason="", readiness=false. Elapsed: 30.73728ms
Aug 13 04:19:07.133: INFO: Pod "var-expansion-b5a63281-4330-4a7f-a0c1-c4721fa3a040": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062874787s
Aug 13 04:19:09.166: INFO: Pod "var-expansion-b5a63281-4330-4a7f-a0c1-c4721fa3a040": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095349984s
STEP: Saw pod success
Aug 13 04:19:09.166: INFO: Pod "var-expansion-b5a63281-4330-4a7f-a0c1-c4721fa3a040" satisfied condition "Succeeded or Failed"
Aug 13 04:19:09.196: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod var-expansion-b5a63281-4330-4a7f-a0c1-c4721fa3a040 container dapi-container: <nil>
STEP: delete the pod
Aug 13 04:19:09.278: INFO: Waiting for pod var-expansion-b5a63281-4330-4a7f-a0c1-c4721fa3a040 to disappear
Aug 13 04:19:09.310: INFO: Pod var-expansion-b5a63281-4330-4a7f-a0c1-c4721fa3a040 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:19:09.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1788" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:09.383: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 149 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-4e678519-d309-4d40-8b02-dd994ce2c24f
STEP: Creating a pod to test consume secrets
Aug 13 04:19:04.197: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c8694474-c681-4d34-a31c-f416dcca2995" in namespace "projected-2570" to be "Succeeded or Failed"
Aug 13 04:19:04.228: INFO: Pod "pod-projected-secrets-c8694474-c681-4d34-a31c-f416dcca2995": Phase="Pending", Reason="", readiness=false. Elapsed: 31.590844ms
Aug 13 04:19:06.260: INFO: Pod "pod-projected-secrets-c8694474-c681-4d34-a31c-f416dcca2995": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063433638s
Aug 13 04:19:08.297: INFO: Pod "pod-projected-secrets-c8694474-c681-4d34-a31c-f416dcca2995": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100407154s
Aug 13 04:19:10.330: INFO: Pod "pod-projected-secrets-c8694474-c681-4d34-a31c-f416dcca2995": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133170323s
Aug 13 04:19:12.362: INFO: Pod "pod-projected-secrets-c8694474-c681-4d34-a31c-f416dcca2995": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16504358s
Aug 13 04:19:14.395: INFO: Pod "pod-projected-secrets-c8694474-c681-4d34-a31c-f416dcca2995": Phase="Pending", Reason="", readiness=false. Elapsed: 10.198614466s
Aug 13 04:19:16.430: INFO: Pod "pod-projected-secrets-c8694474-c681-4d34-a31c-f416dcca2995": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.233235345s
STEP: Saw pod success
Aug 13 04:19:16.430: INFO: Pod "pod-projected-secrets-c8694474-c681-4d34-a31c-f416dcca2995" satisfied condition "Succeeded or Failed"
Aug 13 04:19:16.461: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-projected-secrets-c8694474-c681-4d34-a31c-f416dcca2995 container projected-secret-volume-test: <nil>
STEP: delete the pod
Aug 13 04:19:16.532: INFO: Waiting for pod pod-projected-secrets-c8694474-c681-4d34-a31c-f416dcca2995 to disappear
Aug 13 04:19:16.563: INFO: Pod pod-projected-secrets-c8694474-c681-4d34-a31c-f416dcca2995 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.678 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":67,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
... skipping 60 lines ...
Aug 13 04:18:17.011: INFO: PersistentVolumeClaim csi-hostpath6tvlq found but phase is Pending instead of Bound.
Aug 13 04:18:19.043: INFO: PersistentVolumeClaim csi-hostpath6tvlq found but phase is Pending instead of Bound.
Aug 13 04:18:21.074: INFO: PersistentVolumeClaim csi-hostpath6tvlq found but phase is Pending instead of Bound.
Aug 13 04:18:23.107: INFO: PersistentVolumeClaim csi-hostpath6tvlq found and phase=Bound (20.356312707s)
STEP: Expanding non-expandable pvc
Aug 13 04:18:23.174: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug 13 04:18:23.269: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:25.334: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:27.332: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:29.331: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:31.331: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:33.336: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:35.337: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:37.332: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:39.332: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:41.339: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:43.332: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:45.332: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:47.332: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:49.332: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:51.332: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:53.334: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug 13 04:18:53.396: INFO: Error updating pvc csi-hostpath6tvlq: persistentvolumeclaims "csi-hostpath6tvlq" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Aug 13 04:18:53.396: INFO: Deleting PersistentVolumeClaim "csi-hostpath6tvlq"
Aug 13 04:18:53.428: INFO: Waiting up to 5m0s for PersistentVolume pvc-1f594f85-a9af-4ab2-8e0e-a2101478f394 to get deleted
Aug 13 04:18:53.458: INFO: PersistentVolume pvc-1f594f85-a9af-4ab2-8e0e-a2101478f394 found and phase=Released (30.539955ms)
Aug 13 04:18:58.490: INFO: PersistentVolume pvc-1f594f85-a9af-4ab2-8e0e-a2101478f394 was removed
STEP: Deleting sc
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":2,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:17:49.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 42 lines ...
Aug 13 04:18:31.013: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:33.013: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:35.020: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:37.013: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:39.014: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:41.013: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Aug 13 04:18:41.013: INFO: Running '/tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5381 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.120.133:80 && echo service-down-failed'
Aug 13 04:18:43.499: INFO: rc: 28
Aug 13 04:18:43.499: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.71.120.133:80 && echo service-down-failed" in pod services-5381/verify-service-down-host-exec-pod: error running /tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5381 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.120.133:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.71.120.133:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5381
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Aug 13 04:18:43.617: INFO: Creating new host exec pod
Aug 13 04:18:43.683: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:45.718: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:18:47.716: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Aug 13 04:18:47.716: INFO: Running '/tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5381 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.10.85:80 && echo service-down-failed'
Aug 13 04:18:50.312: INFO: rc: 28
Aug 13 04:18:50.312: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.71.10.85:80 && echo service-down-failed" in pod services-5381/verify-service-down-host-exec-pod: error running /tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5381 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.10.85:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.71.10.85:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5381
STEP: removing service.kubernetes.io/headless annotation
STEP: verifying service is up
Aug 13 04:18:50.430: INFO: Creating new host exec pod
... skipping 18 lines ...
Aug 13 04:19:08.459: INFO: Creating new host exec pod
Aug 13 04:19:08.522: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:19:10.555: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:19:12.554: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:19:14.556: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:19:16.554: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Aug 13 04:19:16.554: INFO: Running '/tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5381 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.120.133:80 && echo service-down-failed'
Aug 13 04:19:19.088: INFO: rc: 28
Aug 13 04:19:19.089: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.71.120.133:80 && echo service-down-failed" in pod services-5381/verify-service-down-host-exec-pod: error running /tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5381 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.120.133:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.71.120.133:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5381
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:19:19.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:90.108 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":2,"skipped":1,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:18:51.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:12.406 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":3,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:21.621: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:19:22.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-623" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":4,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:22.117: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 156 lines ...
STEP: Destroying namespace "node-problem-detector-4739" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.253 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 56 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306
    should update the label on a resource  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:19:15.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:27.637: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 68 lines ...
Aug 13 04:18:36.336: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-xkfss] to have phase Bound
Aug 13 04:18:36.367: INFO: PersistentVolumeClaim pvc-xkfss found and phase=Bound (30.448056ms)
STEP: Deleting the previously created pod
Aug 13 04:18:44.522: INFO: Deleting pod "pvc-volume-tester-svc5v" in namespace "csi-mock-volumes-3878"
Aug 13 04:18:44.554: INFO: Wait up to 5m0s for pod "pvc-volume-tester-svc5v" to be fully deleted
STEP: Checking CSI driver logs
Aug 13 04:18:50.664: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/e1fe2877-0b30-407b-99aa-5a40b8faaada/volumes/kubernetes.io~csi/pvc-4185dbe0-a6ea-4cc6-8fa9-c5d475b8fd0b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-svc5v
Aug 13 04:18:50.665: INFO: Deleting pod "pvc-volume-tester-svc5v" in namespace "csi-mock-volumes-3878"
STEP: Deleting claim pvc-xkfss
Aug 13 04:18:50.763: INFO: Waiting up to 2m0s for PersistentVolume pvc-4185dbe0-a6ea-4cc6-8fa9-c5d475b8fd0b to get deleted
Aug 13 04:18:50.796: INFO: PersistentVolume pvc-4185dbe0-a6ea-4cc6-8fa9-c5d475b8fd0b found and phase=Released (32.375127ms)
Aug 13 04:18:52.828: INFO: PersistentVolume pvc-4185dbe0-a6ea-4cc6-8fa9-c5d475b8fd0b was removed
... skipping 55 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-0a527999-c7e7-4552-9f19-5cdfb1616751
STEP: Creating a pod to test consume secrets
Aug 13 04:19:21.743: INFO: Waiting up to 5m0s for pod "pod-secrets-980096cb-b640-49dc-a4bb-c30d03092d47" in namespace "secrets-6841" to be "Succeeded or Failed"
Aug 13 04:19:21.775: INFO: Pod "pod-secrets-980096cb-b640-49dc-a4bb-c30d03092d47": Phase="Pending", Reason="", readiness=false. Elapsed: 32.368292ms
Aug 13 04:19:23.806: INFO: Pod "pod-secrets-980096cb-b640-49dc-a4bb-c30d03092d47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063625511s
Aug 13 04:19:25.839: INFO: Pod "pod-secrets-980096cb-b640-49dc-a4bb-c30d03092d47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096370327s
Aug 13 04:19:27.871: INFO: Pod "pod-secrets-980096cb-b640-49dc-a4bb-c30d03092d47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128549881s
STEP: Saw pod success
Aug 13 04:19:27.871: INFO: Pod "pod-secrets-980096cb-b640-49dc-a4bb-c30d03092d47" satisfied condition "Succeeded or Failed"
Aug 13 04:19:27.904: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-secrets-980096cb-b640-49dc-a4bb-c30d03092d47 container secret-volume-test: <nil>
STEP: delete the pod
Aug 13 04:19:27.994: INFO: Waiting for pod pod-secrets-980096cb-b640-49dc-a4bb-c30d03092d47 to disappear
Aug 13 04:19:28.026: INFO: Pod pod-secrets-980096cb-b640-49dc-a4bb-c30d03092d47 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.713 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:28.154: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 46 lines ...
Aug 13 04:19:22.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 13 04:19:23.155: INFO: Waiting up to 5m0s for pod "pod-d1db6dad-2ea5-440e-895f-5d0b6f2ffae9" in namespace "emptydir-2901" to be "Succeeded or Failed"
Aug 13 04:19:23.187: INFO: Pod "pod-d1db6dad-2ea5-440e-895f-5d0b6f2ffae9": Phase="Pending", Reason="", readiness=false. Elapsed: 31.540627ms
Aug 13 04:19:25.219: INFO: Pod "pod-d1db6dad-2ea5-440e-895f-5d0b6f2ffae9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063519918s
Aug 13 04:19:27.250: INFO: Pod "pod-d1db6dad-2ea5-440e-895f-5d0b6f2ffae9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094608151s
Aug 13 04:19:29.284: INFO: Pod "pod-d1db6dad-2ea5-440e-895f-5d0b6f2ffae9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128370131s
STEP: Saw pod success
Aug 13 04:19:29.284: INFO: Pod "pod-d1db6dad-2ea5-440e-895f-5d0b6f2ffae9" satisfied condition "Succeeded or Failed"
Aug 13 04:19:29.317: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-d1db6dad-2ea5-440e-895f-5d0b6f2ffae9 container test-container: <nil>
STEP: delete the pod
Aug 13 04:19:29.393: INFO: Waiting for pod pod-d1db6dad-2ea5-440e-895f-5d0b6f2ffae9 to disappear
Aug 13 04:19:29.423: INFO: Pod pod-d1db6dad-2ea5-440e-895f-5d0b6f2ffae9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 89 lines ...
• [SLOW TEST:25.023 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:30.008: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:36.288: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 140 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 117 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":2,"skipped":9,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":17,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:19:29.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":6,"skipped":17,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
... skipping 114 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 41 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":4,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:39.246: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 14 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":2,"skipped":29,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:19:28.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
• [SLOW TEST:12.792 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":3,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:40.935: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
Aug 13 04:19:34.265: INFO: PersistentVolumeClaim pvc-kqxbl found but phase is Pending instead of Bound.
Aug 13 04:19:36.297: INFO: PersistentVolumeClaim pvc-kqxbl found and phase=Bound (12.235435728s)
Aug 13 04:19:36.297: INFO: Waiting up to 3m0s for PersistentVolume local-bkkrm to have phase Bound
Aug 13 04:19:36.333: INFO: PersistentVolume local-bkkrm found and phase=Bound (35.597958ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-cqb8
STEP: Creating a pod to test subpath
Aug 13 04:19:36.428: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cqb8" in namespace "provisioning-5559" to be "Succeeded or Failed"
Aug 13 04:19:36.459: INFO: Pod "pod-subpath-test-preprovisionedpv-cqb8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.392466ms
Aug 13 04:19:38.491: INFO: Pod "pod-subpath-test-preprovisionedpv-cqb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062219485s
Aug 13 04:19:40.523: INFO: Pod "pod-subpath-test-preprovisionedpv-cqb8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094176545s
Aug 13 04:19:42.555: INFO: Pod "pod-subpath-test-preprovisionedpv-cqb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126583149s
STEP: Saw pod success
Aug 13 04:19:42.555: INFO: Pod "pod-subpath-test-preprovisionedpv-cqb8" satisfied condition "Succeeded or Failed"
Aug 13 04:19:42.586: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-cqb8 container test-container-subpath-preprovisionedpv-cqb8: <nil>
STEP: delete the pod
Aug 13 04:19:42.673: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cqb8 to disappear
Aug 13 04:19:42.704: INFO: Pod pod-subpath-test-preprovisionedpv-cqb8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-cqb8
Aug 13 04:19:42.704: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cqb8" in namespace "provisioning-5559"
... skipping 55 lines ...
Aug 13 04:19:33.982: INFO: PersistentVolumeClaim pvc-gtjn9 found but phase is Pending instead of Bound.
Aug 13 04:19:36.014: INFO: PersistentVolumeClaim pvc-gtjn9 found and phase=Bound (14.25752701s)
Aug 13 04:19:36.014: INFO: Waiting up to 3m0s for PersistentVolume local-s5fgs to have phase Bound
Aug 13 04:19:36.045: INFO: PersistentVolume local-s5fgs found and phase=Bound (30.891058ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tcsf
STEP: Creating a pod to test subpath
Aug 13 04:19:36.140: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tcsf" in namespace "provisioning-712" to be "Succeeded or Failed"
Aug 13 04:19:36.177: INFO: Pod "pod-subpath-test-preprovisionedpv-tcsf": Phase="Pending", Reason="", readiness=false. Elapsed: 37.352903ms
Aug 13 04:19:38.218: INFO: Pod "pod-subpath-test-preprovisionedpv-tcsf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078547032s
Aug 13 04:19:40.253: INFO: Pod "pod-subpath-test-preprovisionedpv-tcsf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11362014s
Aug 13 04:19:42.286: INFO: Pod "pod-subpath-test-preprovisionedpv-tcsf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146291988s
STEP: Saw pod success
Aug 13 04:19:42.286: INFO: Pod "pod-subpath-test-preprovisionedpv-tcsf" satisfied condition "Succeeded or Failed"
Aug 13 04:19:42.317: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-tcsf container test-container-volume-preprovisionedpv-tcsf: <nil>
STEP: delete the pod
Aug 13 04:19:42.402: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tcsf to disappear
Aug 13 04:19:42.434: INFO: Pod pod-subpath-test-preprovisionedpv-tcsf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tcsf
Aug 13 04:19:42.434: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tcsf" in namespace "provisioning-712"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":19,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:43.595: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 149 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436
    should modify fsGroup if fsGroupPolicy=default
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":3,"skipped":3,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:43.776: INFO: Driver local doesn't support ext4 -- skipping
... skipping 58 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-b060af33-2145-4684-a433-64186fbfca27
STEP: Creating a pod to test consume secrets
Aug 13 04:19:38.941: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-037a5fb3-54f0-4b00-a668-771cad527335" in namespace "projected-7805" to be "Succeeded or Failed"
Aug 13 04:19:38.972: INFO: Pod "pod-projected-secrets-037a5fb3-54f0-4b00-a668-771cad527335": Phase="Pending", Reason="", readiness=false. Elapsed: 30.768548ms
Aug 13 04:19:41.004: INFO: Pod "pod-projected-secrets-037a5fb3-54f0-4b00-a668-771cad527335": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062132883s
Aug 13 04:19:43.036: INFO: Pod "pod-projected-secrets-037a5fb3-54f0-4b00-a668-771cad527335": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094293661s
Aug 13 04:19:45.068: INFO: Pod "pod-projected-secrets-037a5fb3-54f0-4b00-a668-771cad527335": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126574921s
STEP: Saw pod success
Aug 13 04:19:45.068: INFO: Pod "pod-projected-secrets-037a5fb3-54f0-4b00-a668-771cad527335" satisfied condition "Succeeded or Failed"
Aug 13 04:19:45.099: INFO: Trying to get logs from node ip-172-20-46-56.ca-central-1.compute.internal pod pod-projected-secrets-037a5fb3-54f0-4b00-a668-771cad527335 container projected-secret-volume-test: <nil>
STEP: delete the pod
Aug 13 04:19:45.169: INFO: Waiting for pod pod-projected-secrets-037a5fb3-54f0-4b00-a668-771cad527335 to disappear
Aug 13 04:19:45.199: INFO: Pod pod-projected-secrets-037a5fb3-54f0-4b00-a668-771cad527335 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.557 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:45.303: INFO: Only supported for providers [vsphere] (not aws)
... skipping 66 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Aug 13 04:19:39.449: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-7602" to be "Succeeded or Failed"
Aug 13 04:19:39.479: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 30.230332ms
Aug 13 04:19:41.510: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06139063s
Aug 13 04:19:43.541: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092356632s
Aug 13 04:19:45.572: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.123714167s
Aug 13 04:19:45.573: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:19:45.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7602" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":5,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:45.686: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 254 lines ...
Aug 13 04:19:30.789: INFO: stderr: ""
Aug 13 04:19:30.789: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Aug 13 04:19:30.789: INFO: Waiting for all frontend pods to be Running.
Aug 13 04:19:35.840: INFO: Waiting for frontend to serve content.
Aug 13 04:19:35.877: INFO: Trying to add a new entry to the guestbook.
Aug 13 04:19:40.912: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Aug 13 04:19:46.025: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 13 04:19:46.088: INFO: Running '/tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7741 delete --grace-period=0 --force -f -'
Aug 13 04:19:46.351: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 13 04:19:46.351: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
... skipping 27 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":5,"skipped":16,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:47.601: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 42 lines ...
Aug 13 04:19:34.072: INFO: PersistentVolumeClaim pvc-mp6xz found but phase is Pending instead of Bound.
Aug 13 04:19:36.105: INFO: PersistentVolumeClaim pvc-mp6xz found and phase=Bound (10.195586128s)
Aug 13 04:19:36.105: INFO: Waiting up to 3m0s for PersistentVolume local-c7jxv to have phase Bound
Aug 13 04:19:36.136: INFO: PersistentVolume local-c7jxv found and phase=Bound (31.111201ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nnjv
STEP: Creating a pod to test subpath
Aug 13 04:19:36.237: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nnjv" in namespace "provisioning-8238" to be "Succeeded or Failed"
Aug 13 04:19:36.269: INFO: Pod "pod-subpath-test-preprovisionedpv-nnjv": Phase="Pending", Reason="", readiness=false. Elapsed: 31.271713ms
Aug 13 04:19:38.309: INFO: Pod "pod-subpath-test-preprovisionedpv-nnjv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071502849s
Aug 13 04:19:40.341: INFO: Pod "pod-subpath-test-preprovisionedpv-nnjv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10372377s
Aug 13 04:19:42.374: INFO: Pod "pod-subpath-test-preprovisionedpv-nnjv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137149836s
Aug 13 04:19:44.421: INFO: Pod "pod-subpath-test-preprovisionedpv-nnjv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183696025s
Aug 13 04:19:46.457: INFO: Pod "pod-subpath-test-preprovisionedpv-nnjv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.219216223s
Aug 13 04:19:48.490: INFO: Pod "pod-subpath-test-preprovisionedpv-nnjv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.253057598s
STEP: Saw pod success
Aug 13 04:19:48.490: INFO: Pod "pod-subpath-test-preprovisionedpv-nnjv" satisfied condition "Succeeded or Failed"
Aug 13 04:19:48.522: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-nnjv container test-container-volume-preprovisionedpv-nnjv: <nil>
STEP: delete the pod
Aug 13 04:19:48.615: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nnjv to disappear
Aug 13 04:19:48.649: INFO: Pod pod-subpath-test-preprovisionedpv-nnjv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nnjv
Aug 13 04:19:48.649: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nnjv" in namespace "provisioning-8238"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
... skipping 173 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:49.891: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:50.466: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 233 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should adopt matching orphans and release non-matching pods
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:165
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":3,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:55.775: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-kw8z
STEP: Creating a pod to test atomic-volume-subpath
Aug 13 04:19:36.731: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-kw8z" in namespace "subpath-3106" to be "Succeeded or Failed"
Aug 13 04:19:36.761: INFO: Pod "pod-subpath-test-downwardapi-kw8z": Phase="Pending", Reason="", readiness=false. Elapsed: 30.517073ms
Aug 13 04:19:38.793: INFO: Pod "pod-subpath-test-downwardapi-kw8z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062391078s
Aug 13 04:19:40.825: INFO: Pod "pod-subpath-test-downwardapi-kw8z": Phase="Running", Reason="", readiness=true. Elapsed: 4.094441052s
Aug 13 04:19:42.861: INFO: Pod "pod-subpath-test-downwardapi-kw8z": Phase="Running", Reason="", readiness=true. Elapsed: 6.130249532s
Aug 13 04:19:44.893: INFO: Pod "pod-subpath-test-downwardapi-kw8z": Phase="Running", Reason="", readiness=true. Elapsed: 8.161996805s
Aug 13 04:19:46.925: INFO: Pod "pod-subpath-test-downwardapi-kw8z": Phase="Running", Reason="", readiness=true. Elapsed: 10.194060248s
Aug 13 04:19:48.970: INFO: Pod "pod-subpath-test-downwardapi-kw8z": Phase="Running", Reason="", readiness=true. Elapsed: 12.239423078s
Aug 13 04:19:51.003: INFO: Pod "pod-subpath-test-downwardapi-kw8z": Phase="Running", Reason="", readiness=true. Elapsed: 14.272474927s
Aug 13 04:19:53.036: INFO: Pod "pod-subpath-test-downwardapi-kw8z": Phase="Running", Reason="", readiness=true. Elapsed: 16.305338926s
Aug 13 04:19:55.068: INFO: Pod "pod-subpath-test-downwardapi-kw8z": Phase="Running", Reason="", readiness=true. Elapsed: 18.337369917s
Aug 13 04:19:57.100: INFO: Pod "pod-subpath-test-downwardapi-kw8z": Phase="Running", Reason="", readiness=true. Elapsed: 20.369343312s
Aug 13 04:19:59.134: INFO: Pod "pod-subpath-test-downwardapi-kw8z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.403410064s
STEP: Saw pod success
Aug 13 04:19:59.134: INFO: Pod "pod-subpath-test-downwardapi-kw8z" satisfied condition "Succeeded or Failed"
Aug 13 04:19:59.169: INFO: Trying to get logs from node ip-172-20-46-56.ca-central-1.compute.internal pod pod-subpath-test-downwardapi-kw8z container test-container-subpath-downwardapi-kw8z: <nil>
STEP: delete the pod
Aug 13 04:19:59.244: INFO: Waiting for pod pod-subpath-test-downwardapi-kw8z to disappear
Aug 13 04:19:59.275: INFO: Pod pod-subpath-test-downwardapi-kw8z no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-kw8z
Aug 13 04:19:59.275: INFO: Deleting pod "pod-subpath-test-downwardapi-kw8z" in namespace "subpath-3106"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":57,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:19:59.424: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:01.320: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 60 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:19:51.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 25 lines ...
• [SLOW TEST:10.850 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":5,"skipped":33,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Aug 13 04:19:48.495: INFO: PersistentVolumeClaim pvc-vkxpr found but phase is Pending instead of Bound.
Aug 13 04:19:50.565: INFO: PersistentVolumeClaim pvc-vkxpr found and phase=Bound (2.101373913s)
Aug 13 04:19:50.566: INFO: Waiting up to 3m0s for PersistentVolume local-rvqf8 to have phase Bound
Aug 13 04:19:50.613: INFO: PersistentVolume local-rvqf8 found and phase=Bound (47.480934ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6chs
STEP: Creating a pod to test subpath
Aug 13 04:19:50.910: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6chs" in namespace "provisioning-1709" to be "Succeeded or Failed"
Aug 13 04:19:50.956: INFO: Pod "pod-subpath-test-preprovisionedpv-6chs": Phase="Pending", Reason="", readiness=false. Elapsed: 46.427074ms
Aug 13 04:19:52.988: INFO: Pod "pod-subpath-test-preprovisionedpv-6chs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078437725s
Aug 13 04:19:55.021: INFO: Pod "pod-subpath-test-preprovisionedpv-6chs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11094619s
Aug 13 04:19:57.053: INFO: Pod "pod-subpath-test-preprovisionedpv-6chs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.143115834s
STEP: Saw pod success
Aug 13 04:19:57.053: INFO: Pod "pod-subpath-test-preprovisionedpv-6chs" satisfied condition "Succeeded or Failed"
Aug 13 04:19:57.084: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-6chs container test-container-subpath-preprovisionedpv-6chs: <nil>
STEP: delete the pod
Aug 13 04:19:57.154: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6chs to disappear
Aug 13 04:19:57.186: INFO: Pod pod-subpath-test-preprovisionedpv-6chs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6chs
Aug 13 04:19:57.186: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6chs" in namespace "provisioning-1709"
STEP: Creating pod pod-subpath-test-preprovisionedpv-6chs
STEP: Creating a pod to test subpath
Aug 13 04:19:57.256: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6chs" in namespace "provisioning-1709" to be "Succeeded or Failed"
Aug 13 04:19:57.287: INFO: Pod "pod-subpath-test-preprovisionedpv-6chs": Phase="Pending", Reason="", readiness=false. Elapsed: 30.955815ms
Aug 13 04:19:59.321: INFO: Pod "pod-subpath-test-preprovisionedpv-6chs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064732809s
Aug 13 04:20:01.354: INFO: Pod "pod-subpath-test-preprovisionedpv-6chs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098085685s
STEP: Saw pod success
Aug 13 04:20:01.355: INFO: Pod "pod-subpath-test-preprovisionedpv-6chs" satisfied condition "Succeeded or Failed"
Aug 13 04:20:01.387: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-6chs container test-container-subpath-preprovisionedpv-6chs: <nil>
STEP: delete the pod
Aug 13 04:20:01.457: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6chs to disappear
Aug 13 04:20:01.488: INFO: Pod pod-subpath-test-preprovisionedpv-6chs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6chs
Aug 13 04:20:01.488: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6chs" in namespace "provisioning-1709"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:03.152: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 135 lines ...
STEP: Destroying namespace "apply-3114" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":6,"skipped":37,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":4,"skipped":19,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:19:43.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:04.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6280" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":5,"skipped":19,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":7,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:05.198: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 83 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:470
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":4,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:05.611: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 91 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":76,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:08.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-4385" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":9,"skipped":81,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:09.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-799" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":10,"skipped":86,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:09.560: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":5,"skipped":51,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:19:09.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
• [SLOW TEST:60.586 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:09.894: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 43 lines ...
STEP: Creating a kubernetes client
Aug 13 04:20:03.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Aug 13 04:20:03.452: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:10.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5055" for this suite.


• [SLOW TEST:6.989 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":5,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:10.258: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 134 lines ...
Aug 13 04:19:50.979: INFO: PersistentVolumeClaim pvc-mhlcf found and phase=Bound (14.272714641s)
Aug 13 04:19:50.979: INFO: Waiting up to 3m0s for PersistentVolume nfs-hxx6n to have phase Bound
Aug 13 04:19:51.014: INFO: PersistentVolume nfs-hxx6n found and phase=Bound (34.456271ms)
STEP: Checking pod has write access to PersistentVolume
Aug 13 04:19:51.119: INFO: Creating nfs test pod
Aug 13 04:19:51.158: INFO: Pod should terminate with exitcode 0 (success)
Aug 13 04:19:51.158: INFO: Waiting up to 5m0s for pod "pvc-tester-89g94" in namespace "pv-2788" to be "Succeeded or Failed"
Aug 13 04:19:51.220: INFO: Pod "pvc-tester-89g94": Phase="Pending", Reason="", readiness=false. Elapsed: 62.674735ms
Aug 13 04:19:53.251: INFO: Pod "pvc-tester-89g94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093647703s
Aug 13 04:19:55.283: INFO: Pod "pvc-tester-89g94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12560006s
STEP: Saw pod success
Aug 13 04:19:55.283: INFO: Pod "pvc-tester-89g94" satisfied condition "Succeeded or Failed"
Aug 13 04:19:55.283: INFO: Pod pvc-tester-89g94 succeeded 
Aug 13 04:19:55.283: INFO: Deleting pod "pvc-tester-89g94" in namespace "pv-2788"
Aug 13 04:19:55.326: INFO: Wait up to 5m0s for pod "pvc-tester-89g94" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Aug 13 04:19:55.356: INFO: Deleting PVC pvc-mhlcf to trigger reclamation of PV 
Aug 13 04:19:55.356: INFO: Deleting PersistentVolumeClaim "pvc-mhlcf"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and non-pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:178
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","total":-1,"completed":4,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Aug 13 04:20:02.417: INFO: PersistentVolumeClaim pvc-cpx44 found but phase is Pending instead of Bound.
Aug 13 04:20:04.473: INFO: PersistentVolumeClaim pvc-cpx44 found and phase=Bound (10.218009769s)
Aug 13 04:20:04.473: INFO: Waiting up to 3m0s for PersistentVolume local-lznts to have phase Bound
Aug 13 04:20:04.520: INFO: PersistentVolume local-lznts found and phase=Bound (47.395365ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nvff
STEP: Creating a pod to test subpath
Aug 13 04:20:04.631: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nvff" in namespace "provisioning-8638" to be "Succeeded or Failed"
Aug 13 04:20:04.673: INFO: Pod "pod-subpath-test-preprovisionedpv-nvff": Phase="Pending", Reason="", readiness=false. Elapsed: 42.45426ms
Aug 13 04:20:06.705: INFO: Pod "pod-subpath-test-preprovisionedpv-nvff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074023097s
Aug 13 04:20:08.737: INFO: Pod "pod-subpath-test-preprovisionedpv-nvff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105687778s
Aug 13 04:20:10.768: INFO: Pod "pod-subpath-test-preprovisionedpv-nvff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137062686s
Aug 13 04:20:12.800: INFO: Pod "pod-subpath-test-preprovisionedpv-nvff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169272004s
Aug 13 04:20:14.833: INFO: Pod "pod-subpath-test-preprovisionedpv-nvff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.201730941s
STEP: Saw pod success
Aug 13 04:20:14.833: INFO: Pod "pod-subpath-test-preprovisionedpv-nvff" satisfied condition "Succeeded or Failed"
Aug 13 04:20:14.863: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-nvff container test-container-subpath-preprovisionedpv-nvff: <nil>
STEP: delete the pod
Aug 13 04:20:14.937: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nvff to disappear
Aug 13 04:20:14.968: INFO: Pod pod-subpath-test-preprovisionedpv-nvff no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nvff
Aug 13 04:20:14.969: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nvff" in namespace "provisioning-8638"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:20:12.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-85e06e74-30a3-43b6-8ff1-f57974116062
STEP: Creating a pod to test consume configMaps
Aug 13 04:20:12.397: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-104a42fe-a351-4e1e-9510-59be25403862" in namespace "projected-6344" to be "Succeeded or Failed"
Aug 13 04:20:12.428: INFO: Pod "pod-projected-configmaps-104a42fe-a351-4e1e-9510-59be25403862": Phase="Pending", Reason="", readiness=false. Elapsed: 31.092719ms
Aug 13 04:20:14.462: INFO: Pod "pod-projected-configmaps-104a42fe-a351-4e1e-9510-59be25403862": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065160867s
Aug 13 04:20:16.495: INFO: Pod "pod-projected-configmaps-104a42fe-a351-4e1e-9510-59be25403862": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097304924s
STEP: Saw pod success
Aug 13 04:20:16.495: INFO: Pod "pod-projected-configmaps-104a42fe-a351-4e1e-9510-59be25403862" satisfied condition "Succeeded or Failed"
Aug 13 04:20:16.526: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-projected-configmaps-104a42fe-a351-4e1e-9510-59be25403862 container agnhost-container: <nil>
STEP: delete the pod
Aug 13 04:20:16.595: INFO: Waiting for pod pod-projected-configmaps-104a42fe-a351-4e1e-9510-59be25403862 to disappear
Aug 13 04:20:16.625: INFO: Pod pod-projected-configmaps-104a42fe-a351-4e1e-9510-59be25403862 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:16.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6344" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:16.702: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 88 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:428
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":8,"skipped":50,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:17.336: INFO: Only supported for providers [openstack] (not aws)
... skipping 92 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl copy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1345
    should copy a file from a running Pod
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":6,"skipped":41,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:18.239: INFO: Only supported for providers [gce gke] (not aws)
... skipping 257 lines ...
• [SLOW TEST:21.473 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":5,"skipped":66,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:20.961: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 94 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support inline execution and attach
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:545
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":8,"skipped":49,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":6,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:20:05.344: INFO: >>> kubeConfig: /root/.kube/config
... skipping 14 lines ...
Aug 13 04:20:18.224: INFO: PersistentVolumeClaim pvc-mgtbp found but phase is Pending instead of Bound.
Aug 13 04:20:20.256: INFO: PersistentVolumeClaim pvc-mgtbp found and phase=Bound (4.097546506s)
Aug 13 04:20:20.256: INFO: Waiting up to 3m0s for PersistentVolume local-wrsbl to have phase Bound
Aug 13 04:20:20.289: INFO: PersistentVolume local-wrsbl found and phase=Bound (32.864744ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6hjc
STEP: Creating a pod to test subpath
Aug 13 04:20:20.383: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6hjc" in namespace "provisioning-6396" to be "Succeeded or Failed"
Aug 13 04:20:20.414: INFO: Pod "pod-subpath-test-preprovisionedpv-6hjc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.850968ms
Aug 13 04:20:22.449: INFO: Pod "pod-subpath-test-preprovisionedpv-6hjc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066569963s
Aug 13 04:20:24.482: INFO: Pod "pod-subpath-test-preprovisionedpv-6hjc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099161802s
STEP: Saw pod success
Aug 13 04:20:24.482: INFO: Pod "pod-subpath-test-preprovisionedpv-6hjc" satisfied condition "Succeeded or Failed"
Aug 13 04:20:24.513: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-6hjc container test-container-volume-preprovisionedpv-6hjc: <nil>
STEP: delete the pod
Aug 13 04:20:24.603: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6hjc to disappear
Aug 13 04:20:24.634: INFO: Pod pod-subpath-test-preprovisionedpv-6hjc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6hjc
Aug 13 04:20:24.634: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6hjc" in namespace "provisioning-6396"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:25.469: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 106 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":7,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:25.654: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
Aug 13 04:20:18.636: INFO: PersistentVolumeClaim pvc-vwbrh found but phase is Pending instead of Bound.
Aug 13 04:20:20.670: INFO: PersistentVolumeClaim pvc-vwbrh found and phase=Bound (4.099042258s)
Aug 13 04:20:20.670: INFO: Waiting up to 3m0s for PersistentVolume local-mmfnj to have phase Bound
Aug 13 04:20:20.703: INFO: PersistentVolume local-mmfnj found and phase=Bound (32.398271ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tqvb
STEP: Creating a pod to test subpath
Aug 13 04:20:20.804: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tqvb" in namespace "provisioning-9706" to be "Succeeded or Failed"
Aug 13 04:20:20.836: INFO: Pod "pod-subpath-test-preprovisionedpv-tqvb": Phase="Pending", Reason="", readiness=false. Elapsed: 31.754933ms
Aug 13 04:20:22.870: INFO: Pod "pod-subpath-test-preprovisionedpv-tqvb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065257938s
Aug 13 04:20:24.901: INFO: Pod "pod-subpath-test-preprovisionedpv-tqvb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096811305s
STEP: Saw pod success
Aug 13 04:20:24.902: INFO: Pod "pod-subpath-test-preprovisionedpv-tqvb" satisfied condition "Succeeded or Failed"
Aug 13 04:20:24.933: INFO: Trying to get logs from node ip-172-20-46-56.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-tqvb container test-container-subpath-preprovisionedpv-tqvb: <nil>
STEP: delete the pod
Aug 13 04:20:25.005: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tqvb to disappear
Aug 13 04:20:25.037: INFO: Pod pod-subpath-test-preprovisionedpv-tqvb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tqvb
Aug 13 04:20:25.037: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tqvb" in namespace "provisioning-9706"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:25.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-930" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":8,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:25.984: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 169 lines ...
• [SLOW TEST:14.320 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:51
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":5,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:28.056: INFO: Only supported for providers [gce gke] (not aws)
... skipping 33 lines ...
Aug 13 04:19:38.308: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:19:40.301: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Aug 13 04:19:40.332: INFO: Running '/tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3535 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Aug 13 04:19:40.818: INFO: rc: 7
Aug 13 04:19:40.856: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Aug 13 04:19:40.887: INFO: Pod kube-proxy-mode-detector no longer exists
Aug 13 04:19:40.887: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3535 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
STEP: creating service affinity-nodeport-timeout in namespace services-3535
STEP: creating replication controller affinity-nodeport-timeout in namespace services-3535
I0813 04:19:40.957965    4675 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-3535, replica count: 3
I0813 04:19:44.009900    4675 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0813 04:19:47.010181    4675 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
... skipping 50 lines ...
• [SLOW TEST:60.124 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Aug 13 04:20:28.271: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-77f25d87-d7c0-4fda-8e29-fefb6910ba8f" in namespace "security-context-test-7976" to be "Succeeded or Failed"
Aug 13 04:20:28.306: INFO: Pod "alpine-nnp-nil-77f25d87-d7c0-4fda-8e29-fefb6910ba8f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.76039ms
Aug 13 04:20:30.337: INFO: Pod "alpine-nnp-nil-77f25d87-d7c0-4fda-8e29-fefb6910ba8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066151961s
Aug 13 04:20:32.369: INFO: Pod "alpine-nnp-nil-77f25d87-d7c0-4fda-8e29-fefb6910ba8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097807621s
Aug 13 04:20:34.401: INFO: Pod "alpine-nnp-nil-77f25d87-d7c0-4fda-8e29-fefb6910ba8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129803998s
Aug 13 04:20:34.401: INFO: Pod "alpine-nnp-nil-77f25d87-d7c0-4fda-8e29-fefb6910ba8f" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:34.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7976" for this suite.


... skipping 23 lines ...
Aug 13 04:20:31.922: INFO: The status of Pod pod-update-activedeadlineseconds-a05bc3f9-45eb-4d82-9418-cef4067345d8 is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 13 04:20:32.549: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a05bc3f9-45eb-4d82-9418-cef4067345d8"
Aug 13 04:20:32.549: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a05bc3f9-45eb-4d82-9418-cef4067345d8" in namespace "pods-7547" to be "terminated due to deadline exceeded"
Aug 13 04:20:32.581: INFO: Pod "pod-update-activedeadlineseconds-a05bc3f9-45eb-4d82-9418-cef4067345d8": Phase="Running", Reason="", readiness=true. Elapsed: 31.519081ms
Aug 13 04:20:34.613: INFO: Pod "pod-update-activedeadlineseconds-a05bc3f9-45eb-4d82-9418-cef4067345d8": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.063369932s
Aug 13 04:20:34.613: INFO: Pod "pod-update-activedeadlineseconds-a05bc3f9-45eb-4d82-9418-cef4067345d8" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:34.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7547" for this suite.


• [SLOW TEST:9.009 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":23,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":73,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:20:33.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 13 04:20:33.854: INFO: Waiting up to 5m0s for pod "pod-d721cdee-0c5f-4876-8e70-0613d48388ac" in namespace "emptydir-214" to be "Succeeded or Failed"
Aug 13 04:20:33.885: INFO: Pod "pod-d721cdee-0c5f-4876-8e70-0613d48388ac": Phase="Pending", Reason="", readiness=false. Elapsed: 31.128512ms
Aug 13 04:20:35.917: INFO: Pod "pod-d721cdee-0c5f-4876-8e70-0613d48388ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062833356s
Aug 13 04:20:37.948: INFO: Pod "pod-d721cdee-0c5f-4876-8e70-0613d48388ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094231305s
STEP: Saw pod success
Aug 13 04:20:37.948: INFO: Pod "pod-d721cdee-0c5f-4876-8e70-0613d48388ac" satisfied condition "Succeeded or Failed"
Aug 13 04:20:37.978: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-d721cdee-0c5f-4876-8e70-0613d48388ac container test-container: <nil>
STEP: delete the pod
Aug 13 04:20:38.061: INFO: Waiting for pod pod-d721cdee-0c5f-4876-8e70-0613d48388ac to disappear
Aug 13 04:20:38.092: INFO: Pod pod-d721cdee-0c5f-4876-8e70-0613d48388ac no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:38.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-214" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:38.168: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":6,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:19:46.652: INFO: >>> kubeConfig: /root/.kube/config
... skipping 124 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 04:20:35.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63764425235, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63764425235, loc:(*time.Location)(0x9de2b80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63764425235, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63764425235, loc:(*time.Location)(0x9de2b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 04:20:38.620: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:38.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7339" for this suite.
STEP: Destroying namespace "webhook-7339-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":9,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:39.134: INFO: Driver emptydir doesn't support ext3 -- skipping
... skipping 20 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:20:30.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:42.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1860" for this suite.


• [SLOW TEST:12.313 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":9,"skipped":71,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:42.503: INFO: Only supported for providers [gce gke] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-6086/configmap-test-745e5c8f-6b29-4faf-a297-0a33823be2ea
STEP: Creating a pod to test consume configMaps
Aug 13 04:20:39.311: INFO: Waiting up to 5m0s for pod "pod-configmaps-63f20eba-91b9-47cf-8c89-52827f902042" in namespace "configmap-6086" to be "Succeeded or Failed"
Aug 13 04:20:39.342: INFO: Pod "pod-configmaps-63f20eba-91b9-47cf-8c89-52827f902042": Phase="Pending", Reason="", readiness=false. Elapsed: 30.570606ms
Aug 13 04:20:41.373: INFO: Pod "pod-configmaps-63f20eba-91b9-47cf-8c89-52827f902042": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061670546s
Aug 13 04:20:43.404: INFO: Pod "pod-configmaps-63f20eba-91b9-47cf-8c89-52827f902042": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092653791s
STEP: Saw pod success
Aug 13 04:20:43.404: INFO: Pod "pod-configmaps-63f20eba-91b9-47cf-8c89-52827f902042" satisfied condition "Succeeded or Failed"
Aug 13 04:20:43.434: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-configmaps-63f20eba-91b9-47cf-8c89-52827f902042 container env-test: <nil>
STEP: delete the pod
Aug 13 04:20:43.519: INFO: Waiting for pod pod-configmaps-63f20eba-91b9-47cf-8c89-52827f902042 to disappear
Aug 13 04:20:43.549: INFO: Pod pod-configmaps-63f20eba-91b9-47cf-8c89-52827f902042 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:43.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6086" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":52,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:46.058: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
... skipping 137 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:20:20.570: INFO: >>> kubeConfig: /root/.kube/config
... skipping 20 lines ...
Aug 13 04:20:34.199: INFO: PersistentVolumeClaim pvc-zr8g7 found but phase is Pending instead of Bound.
Aug 13 04:20:36.232: INFO: PersistentVolumeClaim pvc-zr8g7 found and phase=Bound (8.159140554s)
Aug 13 04:20:36.232: INFO: Waiting up to 3m0s for PersistentVolume local-d4vss to have phase Bound
Aug 13 04:20:36.263: INFO: PersistentVolume local-d4vss found and phase=Bound (30.71968ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xx5z
STEP: Creating a pod to test subpath
Aug 13 04:20:36.397: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xx5z" in namespace "provisioning-6817" to be "Succeeded or Failed"
Aug 13 04:20:36.439: INFO: Pod "pod-subpath-test-preprovisionedpv-xx5z": Phase="Pending", Reason="", readiness=false. Elapsed: 41.670087ms
Aug 13 04:20:38.474: INFO: Pod "pod-subpath-test-preprovisionedpv-xx5z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076412941s
Aug 13 04:20:40.514: INFO: Pod "pod-subpath-test-preprovisionedpv-xx5z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116499642s
Aug 13 04:20:42.545: INFO: Pod "pod-subpath-test-preprovisionedpv-xx5z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147865762s
Aug 13 04:20:44.576: INFO: Pod "pod-subpath-test-preprovisionedpv-xx5z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179039795s
Aug 13 04:20:46.608: INFO: Pod "pod-subpath-test-preprovisionedpv-xx5z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.210635798s
STEP: Saw pod success
Aug 13 04:20:46.608: INFO: Pod "pod-subpath-test-preprovisionedpv-xx5z" satisfied condition "Succeeded or Failed"
Aug 13 04:20:46.639: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-xx5z container test-container-subpath-preprovisionedpv-xx5z: <nil>
STEP: delete the pod
Aug 13 04:20:46.710: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xx5z to disappear
Aug 13 04:20:46.742: INFO: Pod pod-subpath-test-preprovisionedpv-xx5z no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xx5z
Aug 13 04:20:46.742: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xx5z" in namespace "provisioning-6817"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:22.689 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:48.519: INFO: Only supported for providers [vsphere] (not aws)
... skipping 43 lines ...
Aug 13 04:20:19.131: INFO: PersistentVolumeClaim pvc-st8m8 found but phase is Pending instead of Bound.
Aug 13 04:20:21.162: INFO: PersistentVolumeClaim pvc-st8m8 found and phase=Bound (6.127955208s)
Aug 13 04:20:21.162: INFO: Waiting up to 3m0s for PersistentVolume local-jfxjw to have phase Bound
Aug 13 04:20:21.193: INFO: PersistentVolume local-jfxjw found and phase=Bound (30.855643ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qxh4
STEP: Creating a pod to test atomic-volume-subpath
Aug 13 04:20:21.287: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qxh4" in namespace "provisioning-7398" to be "Succeeded or Failed"
Aug 13 04:20:21.318: INFO: Pod "pod-subpath-test-preprovisionedpv-qxh4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.162501ms
Aug 13 04:20:23.355: INFO: Pod "pod-subpath-test-preprovisionedpv-qxh4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067909273s
Aug 13 04:20:25.394: INFO: Pod "pod-subpath-test-preprovisionedpv-qxh4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106797859s
Aug 13 04:20:27.427: INFO: Pod "pod-subpath-test-preprovisionedpv-qxh4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140275276s
Aug 13 04:20:29.461: INFO: Pod "pod-subpath-test-preprovisionedpv-qxh4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174460095s
Aug 13 04:20:31.494: INFO: Pod "pod-subpath-test-preprovisionedpv-qxh4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207103688s
... skipping 4 lines ...
Aug 13 04:20:41.658: INFO: Pod "pod-subpath-test-preprovisionedpv-qxh4": Phase="Running", Reason="", readiness=true. Elapsed: 20.371124015s
Aug 13 04:20:43.690: INFO: Pod "pod-subpath-test-preprovisionedpv-qxh4": Phase="Running", Reason="", readiness=true. Elapsed: 22.403540678s
Aug 13 04:20:45.727: INFO: Pod "pod-subpath-test-preprovisionedpv-qxh4": Phase="Running", Reason="", readiness=true. Elapsed: 24.440433322s
Aug 13 04:20:47.760: INFO: Pod "pod-subpath-test-preprovisionedpv-qxh4": Phase="Running", Reason="", readiness=true. Elapsed: 26.472858753s
Aug 13 04:20:49.792: INFO: Pod "pod-subpath-test-preprovisionedpv-qxh4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.505403932s
STEP: Saw pod success
Aug 13 04:20:49.792: INFO: Pod "pod-subpath-test-preprovisionedpv-qxh4" satisfied condition "Succeeded or Failed"
Aug 13 04:20:49.823: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-qxh4 container test-container-subpath-preprovisionedpv-qxh4: <nil>
STEP: delete the pod
Aug 13 04:20:49.899: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qxh4 to disappear
Aug 13 04:20:49.930: INFO: Pod pod-subpath-test-preprovisionedpv-qxh4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qxh4
Aug 13 04:20:49.931: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qxh4" in namespace "provisioning-7398"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":11,"skipped":90,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:51.407: INFO: Driver aws doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Aug 13 04:20:47.071: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Aug 13 04:20:47.071: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kt58
STEP: Creating a pod to test subpath
Aug 13 04:20:47.106: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kt58" in namespace "provisioning-3608" to be "Succeeded or Failed"
Aug 13 04:20:47.137: INFO: Pod "pod-subpath-test-inlinevolume-kt58": Phase="Pending", Reason="", readiness=false. Elapsed: 30.827495ms
Aug 13 04:20:49.169: INFO: Pod "pod-subpath-test-inlinevolume-kt58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062563086s
Aug 13 04:20:51.200: INFO: Pod "pod-subpath-test-inlinevolume-kt58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09374733s
STEP: Saw pod success
Aug 13 04:20:51.200: INFO: Pod "pod-subpath-test-inlinevolume-kt58" satisfied condition "Succeeded or Failed"
Aug 13 04:20:51.231: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-kt58 container test-container-subpath-inlinevolume-kt58: <nil>
STEP: delete the pod
Aug 13 04:20:51.301: INFO: Waiting for pod pod-subpath-test-inlinevolume-kt58 to disappear
Aug 13 04:20:51.333: INFO: Pod pod-subpath-test-inlinevolume-kt58 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kt58
Aug 13 04:20:51.334: INFO: Deleting pod "pod-subpath-test-inlinevolume-kt58" in namespace "provisioning-3608"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:20:51.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3608" for this suite.

•SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":32,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:51.487: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 58 lines ...
Aug 13 04:20:05.781: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1984v684g
STEP: creating a claim
Aug 13 04:20:05.812: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-gfmg
STEP: Creating a pod to test subpath
Aug 13 04:20:05.908: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-gfmg" in namespace "provisioning-1984" to be "Succeeded or Failed"
Aug 13 04:20:05.939: INFO: Pod "pod-subpath-test-dynamicpv-gfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 31.397074ms
Aug 13 04:20:07.971: INFO: Pod "pod-subpath-test-dynamicpv-gfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06355s
Aug 13 04:20:10.002: INFO: Pod "pod-subpath-test-dynamicpv-gfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094498983s
Aug 13 04:20:12.034: INFO: Pod "pod-subpath-test-dynamicpv-gfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125838588s
Aug 13 04:20:14.065: INFO: Pod "pod-subpath-test-dynamicpv-gfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157142562s
Aug 13 04:20:16.097: INFO: Pod "pod-subpath-test-dynamicpv-gfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.189190391s
Aug 13 04:20:18.129: INFO: Pod "pod-subpath-test-dynamicpv-gfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.221661327s
Aug 13 04:20:20.161: INFO: Pod "pod-subpath-test-dynamicpv-gfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.253393153s
Aug 13 04:20:22.194: INFO: Pod "pod-subpath-test-dynamicpv-gfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 16.28639081s
Aug 13 04:20:24.225: INFO: Pod "pod-subpath-test-dynamicpv-gfmg": Phase="Pending", Reason="", readiness=false. Elapsed: 18.317247025s
Aug 13 04:20:26.258: INFO: Pod "pod-subpath-test-dynamicpv-gfmg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.349837416s
STEP: Saw pod success
Aug 13 04:20:26.258: INFO: Pod "pod-subpath-test-dynamicpv-gfmg" satisfied condition "Succeeded or Failed"
Aug 13 04:20:26.288: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-gfmg container test-container-volume-dynamicpv-gfmg: <nil>
STEP: delete the pod
Aug 13 04:20:26.367: INFO: Waiting for pod pod-subpath-test-dynamicpv-gfmg to disappear
Aug 13 04:20:26.397: INFO: Pod pod-subpath-test-dynamicpv-gfmg no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-gfmg
Aug 13 04:20:26.397: INFO: Deleting pod "pod-subpath-test-dynamicpv-gfmg" in namespace "provisioning-1984"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":29,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:51.865: INFO: Only supported for providers [gce gke] (not aws)
... skipping 32 lines ...
Aug 13 04:20:17.566: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-43724kpw6
STEP: creating a claim
Aug 13 04:20:17.597: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-7mgs
STEP: Creating a pod to test subpath
Aug 13 04:20:17.709: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-7mgs" in namespace "provisioning-4372" to be "Succeeded or Failed"
Aug 13 04:20:17.741: INFO: Pod "pod-subpath-test-dynamicpv-7mgs": Phase="Pending", Reason="", readiness=false. Elapsed: 32.073326ms
Aug 13 04:20:19.772: INFO: Pod "pod-subpath-test-dynamicpv-7mgs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063057171s
Aug 13 04:20:21.804: INFO: Pod "pod-subpath-test-dynamicpv-7mgs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09544051s
Aug 13 04:20:23.836: INFO: Pod "pod-subpath-test-dynamicpv-7mgs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127094598s
Aug 13 04:20:25.868: INFO: Pod "pod-subpath-test-dynamicpv-7mgs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159587643s
Aug 13 04:20:27.900: INFO: Pod "pod-subpath-test-dynamicpv-7mgs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.19086991s
Aug 13 04:20:29.931: INFO: Pod "pod-subpath-test-dynamicpv-7mgs": Phase="Pending", Reason="", readiness=false. Elapsed: 12.222080007s
Aug 13 04:20:31.962: INFO: Pod "pod-subpath-test-dynamicpv-7mgs": Phase="Pending", Reason="", readiness=false. Elapsed: 14.253508273s
Aug 13 04:20:33.993: INFO: Pod "pod-subpath-test-dynamicpv-7mgs": Phase="Pending", Reason="", readiness=false. Elapsed: 16.284252356s
Aug 13 04:20:36.025: INFO: Pod "pod-subpath-test-dynamicpv-7mgs": Phase="Pending", Reason="", readiness=false. Elapsed: 18.316416264s
Aug 13 04:20:38.059: INFO: Pod "pod-subpath-test-dynamicpv-7mgs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.350284428s
STEP: Saw pod success
Aug 13 04:20:38.059: INFO: Pod "pod-subpath-test-dynamicpv-7mgs" satisfied condition "Succeeded or Failed"
Aug 13 04:20:38.090: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-7mgs container test-container-subpath-dynamicpv-7mgs: <nil>
STEP: delete the pod
Aug 13 04:20:38.165: INFO: Waiting for pod pod-subpath-test-dynamicpv-7mgs to disappear
Aug 13 04:20:38.195: INFO: Pod pod-subpath-test-dynamicpv-7mgs no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-7mgs
Aug 13 04:20:38.195: INFO: Deleting pod "pod-subpath-test-dynamicpv-7mgs" in namespace "provisioning-4372"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:53.600: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-83617366-ae87-4e2b-b54c-75b8b6f97632
STEP: Creating a pod to test consume configMaps
Aug 13 04:20:46.371: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2cfb5aaf-1b22-4383-8da6-f15baa6293b8" in namespace "projected-3770" to be "Succeeded or Failed"
Aug 13 04:20:46.402: INFO: Pod "pod-projected-configmaps-2cfb5aaf-1b22-4383-8da6-f15baa6293b8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.523044ms
Aug 13 04:20:48.435: INFO: Pod "pod-projected-configmaps-2cfb5aaf-1b22-4383-8da6-f15baa6293b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063404389s
Aug 13 04:20:50.466: INFO: Pod "pod-projected-configmaps-2cfb5aaf-1b22-4383-8da6-f15baa6293b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095319991s
Aug 13 04:20:52.499: INFO: Pod "pod-projected-configmaps-2cfb5aaf-1b22-4383-8da6-f15baa6293b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127478108s
Aug 13 04:20:54.531: INFO: Pod "pod-projected-configmaps-2cfb5aaf-1b22-4383-8da6-f15baa6293b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.159554474s
STEP: Saw pod success
Aug 13 04:20:54.531: INFO: Pod "pod-projected-configmaps-2cfb5aaf-1b22-4383-8da6-f15baa6293b8" satisfied condition "Succeeded or Failed"
Aug 13 04:20:54.562: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-projected-configmaps-2cfb5aaf-1b22-4383-8da6-f15baa6293b8 container agnhost-container: <nil>
STEP: delete the pod
Aug 13 04:20:54.638: INFO: Waiting for pod pod-projected-configmaps-2cfb5aaf-1b22-4383-8da6-f15baa6293b8 to disappear
Aug 13 04:20:54.669: INFO: Pod pod-projected-configmaps-2cfb5aaf-1b22-4383-8da6-f15baa6293b8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.582 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:20:54.750: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 67 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":9,"skipped":56,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":6,"skipped":15,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:20:34.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9544
STEP: Creating statefulset with conflicting port in namespace statefulset-9544
STEP: Waiting until pod test-pod will start running in namespace statefulset-9544
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9544
Aug 13 04:20:40.889: INFO: Observed stateful pod in namespace: statefulset-9544, name: ss-0, uid: 23132038-89a1-4a2a-9e56-25dbd64c0cab, status phase: Pending. Waiting for statefulset controller to delete.
Aug 13 04:20:41.557: INFO: Observed stateful pod in namespace: statefulset-9544, name: ss-0, uid: 23132038-89a1-4a2a-9e56-25dbd64c0cab, status phase: Failed. Waiting for statefulset controller to delete.
Aug 13 04:20:41.566: INFO: Observed stateful pod in namespace: statefulset-9544, name: ss-0, uid: 23132038-89a1-4a2a-9e56-25dbd64c0cab, status phase: Failed. Waiting for statefulset controller to delete.
Aug 13 04:20:41.568: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9544
STEP: Removing pod with conflicting port in namespace statefulset-9544
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9544 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Aug 13 04:20:47.728: INFO: Deleting all statefulset in ns statefulset-9544
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":7,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:11.454 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:481
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":12,"skipped":111,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:03.022: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-9pfp
STEP: Creating a pod to test atomic-volume-subpath
Aug 13 04:20:38.431: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9pfp" in namespace "subpath-1340" to be "Succeeded or Failed"
Aug 13 04:20:38.462: INFO: Pod "pod-subpath-test-configmap-9pfp": Phase="Pending", Reason="", readiness=false. Elapsed: 31.571055ms
Aug 13 04:20:40.496: INFO: Pod "pod-subpath-test-configmap-9pfp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065125582s
Aug 13 04:20:42.528: INFO: Pod "pod-subpath-test-configmap-9pfp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096906679s
Aug 13 04:20:44.558: INFO: Pod "pod-subpath-test-configmap-9pfp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127735516s
Aug 13 04:20:46.590: INFO: Pod "pod-subpath-test-configmap-9pfp": Phase="Running", Reason="", readiness=true. Elapsed: 8.159485546s
Aug 13 04:20:48.630: INFO: Pod "pod-subpath-test-configmap-9pfp": Phase="Running", Reason="", readiness=true. Elapsed: 10.19928211s
... skipping 2 lines ...
Aug 13 04:20:54.726: INFO: Pod "pod-subpath-test-configmap-9pfp": Phase="Running", Reason="", readiness=true. Elapsed: 16.295712663s
Aug 13 04:20:56.759: INFO: Pod "pod-subpath-test-configmap-9pfp": Phase="Running", Reason="", readiness=true. Elapsed: 18.327938533s
Aug 13 04:20:58.790: INFO: Pod "pod-subpath-test-configmap-9pfp": Phase="Running", Reason="", readiness=true. Elapsed: 20.359463429s
Aug 13 04:21:00.822: INFO: Pod "pod-subpath-test-configmap-9pfp": Phase="Running", Reason="", readiness=true. Elapsed: 22.391411s
Aug 13 04:21:02.854: INFO: Pod "pod-subpath-test-configmap-9pfp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.423650542s
STEP: Saw pod success
Aug 13 04:21:02.854: INFO: Pod "pod-subpath-test-configmap-9pfp" satisfied condition "Succeeded or Failed"
Aug 13 04:21:02.886: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-subpath-test-configmap-9pfp container test-container-subpath-configmap-9pfp: <nil>
STEP: delete the pod
Aug 13 04:21:02.956: INFO: Waiting for pod pod-subpath-test-configmap-9pfp to disappear
Aug 13 04:21:02.987: INFO: Pod pod-subpath-test-configmap-9pfp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9pfp
Aug 13 04:21:02.987: INFO: Deleting pod "pod-subpath-test-configmap-9pfp" in namespace "subpath-1340"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:03.096: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 59 lines ...
• [SLOW TEST:11.452 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:531
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":6,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:03.342: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 67 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":5,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:04.397: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 18 lines ...
Aug 13 04:19:03.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W0813 04:19:03.311590    4812 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:294
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-3578" for this suite.


• [SLOW TEST:124.488 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:294
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:07.626: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 75 lines ...
Aug 13 04:20:29.803: INFO: PersistentVolumeClaim csi-hostpathtjf4v found but phase is Pending instead of Bound.
Aug 13 04:20:31.834: INFO: PersistentVolumeClaim csi-hostpathtjf4v found but phase is Pending instead of Bound.
Aug 13 04:20:33.870: INFO: PersistentVolumeClaim csi-hostpathtjf4v found but phase is Pending instead of Bound.
Aug 13 04:20:35.902: INFO: PersistentVolumeClaim csi-hostpathtjf4v found and phase=Bound (8.160643407s)
STEP: Creating pod pod-subpath-test-dynamicpv-6khd
STEP: Creating a pod to test subpath
Aug 13 04:20:35.995: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-6khd" in namespace "provisioning-6584" to be "Succeeded or Failed"
Aug 13 04:20:36.026: INFO: Pod "pod-subpath-test-dynamicpv-6khd": Phase="Pending", Reason="", readiness=false. Elapsed: 31.162185ms
Aug 13 04:20:38.060: INFO: Pod "pod-subpath-test-dynamicpv-6khd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064624763s
Aug 13 04:20:40.092: INFO: Pod "pod-subpath-test-dynamicpv-6khd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097042076s
Aug 13 04:20:42.124: INFO: Pod "pod-subpath-test-dynamicpv-6khd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128751589s
STEP: Saw pod success
Aug 13 04:20:42.124: INFO: Pod "pod-subpath-test-dynamicpv-6khd" satisfied condition "Succeeded or Failed"
Aug 13 04:20:42.156: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-6khd container test-container-subpath-dynamicpv-6khd: <nil>
STEP: delete the pod
Aug 13 04:20:42.233: INFO: Waiting for pod pod-subpath-test-dynamicpv-6khd to disappear
Aug 13 04:20:42.263: INFO: Pod pod-subpath-test-dynamicpv-6khd no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-6khd
Aug 13 04:20:42.263: INFO: Deleting pod "pod-subpath-test-dynamicpv-6khd" in namespace "provisioning-6584"
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:10.733: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 62 lines ...
Aug 13 04:21:02.305: INFO: PersistentVolumeClaim pvc-2b5pp found but phase is Pending instead of Bound.
Aug 13 04:21:04.344: INFO: PersistentVolumeClaim pvc-2b5pp found and phase=Bound (10.201866457s)
Aug 13 04:21:04.344: INFO: Waiting up to 3m0s for PersistentVolume local-g6n99 to have phase Bound
Aug 13 04:21:04.375: INFO: PersistentVolume local-g6n99 found and phase=Bound (30.731355ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-fxrr
STEP: Creating a pod to test exec-volume-test
Aug 13 04:21:04.472: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-fxrr" in namespace "volume-5033" to be "Succeeded or Failed"
Aug 13 04:21:04.504: INFO: Pod "exec-volume-test-preprovisionedpv-fxrr": Phase="Pending", Reason="", readiness=false. Elapsed: 31.229419ms
Aug 13 04:21:06.536: INFO: Pod "exec-volume-test-preprovisionedpv-fxrr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06335341s
Aug 13 04:21:08.567: INFO: Pod "exec-volume-test-preprovisionedpv-fxrr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09428825s
Aug 13 04:21:10.599: INFO: Pod "exec-volume-test-preprovisionedpv-fxrr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126815403s
STEP: Saw pod success
Aug 13 04:21:10.599: INFO: Pod "exec-volume-test-preprovisionedpv-fxrr" satisfied condition "Succeeded or Failed"
Aug 13 04:21:10.630: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-fxrr container exec-container-preprovisionedpv-fxrr: <nil>
STEP: delete the pod
Aug 13 04:21:10.702: INFO: Waiting for pod exec-volume-test-preprovisionedpv-fxrr to disappear
Aug 13 04:21:10.733: INFO: Pod exec-volume-test-preprovisionedpv-fxrr no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-fxrr
Aug 13 04:21:10.733: INFO: Deleting pod "exec-volume-test-preprovisionedpv-fxrr" in namespace "volume-5033"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":40,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:11.296: INFO: Driver local doesn't support ext3 -- skipping
... skipping 88 lines ...
Aug 13 04:21:02.675: INFO: PersistentVolumeClaim pvc-z79cw found but phase is Pending instead of Bound.
Aug 13 04:21:04.718: INFO: PersistentVolumeClaim pvc-z79cw found and phase=Bound (8.167962085s)
Aug 13 04:21:04.718: INFO: Waiting up to 3m0s for PersistentVolume local-g6s7q to have phase Bound
Aug 13 04:21:04.748: INFO: PersistentVolume local-g6s7q found and phase=Bound (30.490499ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dh4b
STEP: Creating a pod to test subpath
Aug 13 04:21:04.852: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dh4b" in namespace "provisioning-2132" to be "Succeeded or Failed"
Aug 13 04:21:04.883: INFO: Pod "pod-subpath-test-preprovisionedpv-dh4b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.766278ms
Aug 13 04:21:06.914: INFO: Pod "pod-subpath-test-preprovisionedpv-dh4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061703031s
Aug 13 04:21:08.946: INFO: Pod "pod-subpath-test-preprovisionedpv-dh4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09302055s
Aug 13 04:21:10.978: INFO: Pod "pod-subpath-test-preprovisionedpv-dh4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.125287615s
STEP: Saw pod success
Aug 13 04:21:10.978: INFO: Pod "pod-subpath-test-preprovisionedpv-dh4b" satisfied condition "Succeeded or Failed"
Aug 13 04:21:11.009: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-dh4b container test-container-subpath-preprovisionedpv-dh4b: <nil>
STEP: delete the pod
Aug 13 04:21:11.082: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dh4b to disappear
Aug 13 04:21:11.113: INFO: Pod pod-subpath-test-preprovisionedpv-dh4b no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dh4b
Aug 13 04:21:11.113: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dh4b" in namespace "provisioning-2132"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:12.274: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 29 lines ...
Aug 13 04:20:18.992: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Aug 13 04:20:20.998: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Aug 13 04:20:21.032: INFO: Running '/tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1940 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Aug 13 04:20:21.493: INFO: rc: 7
Aug 13 04:20:21.528: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Aug 13 04:20:21.559: INFO: Pod kube-proxy-mode-detector no longer exists
Aug 13 04:20:21.559: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1940 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
STEP: creating service affinity-clusterip-timeout in namespace services-1940
STEP: creating replication controller affinity-clusterip-timeout in namespace services-1940
I0813 04:20:21.625845    4672 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1940, replica count: 3
I0813 04:20:24.677633    4672 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0813 04:20:27.678303    4672 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
... skipping 46 lines ...
• [SLOW TEST:55.619 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:12.372: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:21:12.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-7528" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":7,"skipped":28,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 16 lines ...
Aug 13 04:21:03.022: INFO: PersistentVolumeClaim pvc-6lfhx found but phase is Pending instead of Bound.
Aug 13 04:21:05.054: INFO: PersistentVolumeClaim pvc-6lfhx found and phase=Bound (4.093230501s)
Aug 13 04:21:05.054: INFO: Waiting up to 3m0s for PersistentVolume local-rtcgp to have phase Bound
Aug 13 04:21:05.089: INFO: PersistentVolume local-rtcgp found and phase=Bound (35.139348ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-msqr
STEP: Creating a pod to test exec-volume-test
Aug 13 04:21:05.220: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-msqr" in namespace "volume-3157" to be "Succeeded or Failed"
Aug 13 04:21:05.289: INFO: Pod "exec-volume-test-preprovisionedpv-msqr": Phase="Pending", Reason="", readiness=false. Elapsed: 69.098253ms
Aug 13 04:21:07.320: INFO: Pod "exec-volume-test-preprovisionedpv-msqr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100198634s
Aug 13 04:21:09.356: INFO: Pod "exec-volume-test-preprovisionedpv-msqr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135625454s
Aug 13 04:21:11.392: INFO: Pod "exec-volume-test-preprovisionedpv-msqr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171876022s
Aug 13 04:21:13.424: INFO: Pod "exec-volume-test-preprovisionedpv-msqr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.203591358s
STEP: Saw pod success
Aug 13 04:21:13.424: INFO: Pod "exec-volume-test-preprovisionedpv-msqr" satisfied condition "Succeeded or Failed"
Aug 13 04:21:13.455: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-msqr container exec-container-preprovisionedpv-msqr: <nil>
STEP: delete the pod
Aug 13 04:21:13.524: INFO: Waiting for pod exec-volume-test-preprovisionedpv-msqr to disappear
Aug 13 04:21:13.556: INFO: Pod exec-volume-test-preprovisionedpv-msqr no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-msqr
Aug 13 04:21:13.556: INFO: Deleting pod "exec-volume-test-preprovisionedpv-msqr" in namespace "volume-3157"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:14.056: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 196 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    unlimited
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":10,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:15.395: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 103 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:21:15.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6192" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":11,"skipped":50,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Aug 13 04:21:03.118: INFO: PersistentVolumeClaim pvc-l4lfh found but phase is Pending instead of Bound.
Aug 13 04:21:05.157: INFO: PersistentVolumeClaim pvc-l4lfh found and phase=Bound (10.195570735s)
Aug 13 04:21:05.157: INFO: Waiting up to 3m0s for PersistentVolume local-mkbn4 to have phase Bound
Aug 13 04:21:05.242: INFO: PersistentVolume local-mkbn4 found and phase=Bound (84.969832ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ch4z
STEP: Creating a pod to test subpath
Aug 13 04:21:05.408: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ch4z" in namespace "provisioning-9610" to be "Succeeded or Failed"
Aug 13 04:21:05.456: INFO: Pod "pod-subpath-test-preprovisionedpv-ch4z": Phase="Pending", Reason="", readiness=false. Elapsed: 48.199375ms
Aug 13 04:21:07.488: INFO: Pod "pod-subpath-test-preprovisionedpv-ch4z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079961423s
Aug 13 04:21:09.521: INFO: Pod "pod-subpath-test-preprovisionedpv-ch4z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11314371s
Aug 13 04:21:11.554: INFO: Pod "pod-subpath-test-preprovisionedpv-ch4z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146009179s
Aug 13 04:21:13.585: INFO: Pod "pod-subpath-test-preprovisionedpv-ch4z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176981735s
Aug 13 04:21:15.619: INFO: Pod "pod-subpath-test-preprovisionedpv-ch4z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.210582403s
STEP: Saw pod success
Aug 13 04:21:15.619: INFO: Pod "pod-subpath-test-preprovisionedpv-ch4z" satisfied condition "Succeeded or Failed"
Aug 13 04:21:15.652: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-ch4z container test-container-volume-preprovisionedpv-ch4z: <nil>
STEP: delete the pod
Aug 13 04:21:15.764: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ch4z to disappear
Aug 13 04:21:15.795: INFO: Pod pod-subpath-test-preprovisionedpv-ch4z no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ch4z
Aug 13 04:21:15.795: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ch4z" in namespace "provisioning-9610"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:16.367: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-0b4520c0-3bd1-44a0-b49a-5d8503416528
STEP: Creating a pod to test consume configMaps
Aug 13 04:21:12.525: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b9897234-a99b-4788-9c54-ee918f1338d8" in namespace "projected-8718" to be "Succeeded or Failed"
Aug 13 04:21:12.556: INFO: Pod "pod-projected-configmaps-b9897234-a99b-4788-9c54-ee918f1338d8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.702603ms
Aug 13 04:21:14.587: INFO: Pod "pod-projected-configmaps-b9897234-a99b-4788-9c54-ee918f1338d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061721586s
Aug 13 04:21:16.619: INFO: Pod "pod-projected-configmaps-b9897234-a99b-4788-9c54-ee918f1338d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093537512s
Aug 13 04:21:18.650: INFO: Pod "pod-projected-configmaps-b9897234-a99b-4788-9c54-ee918f1338d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124514488s
STEP: Saw pod success
Aug 13 04:21:18.650: INFO: Pod "pod-projected-configmaps-b9897234-a99b-4788-9c54-ee918f1338d8" satisfied condition "Succeeded or Failed"
Aug 13 04:21:18.680: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod pod-projected-configmaps-b9897234-a99b-4788-9c54-ee918f1338d8 container agnhost-container: <nil>
STEP: delete the pod
Aug 13 04:21:18.752: INFO: Waiting for pod pod-projected-configmaps-b9897234-a99b-4788-9c54-ee918f1338d8 to disappear
Aug 13 04:21:18.786: INFO: Pod pod-projected-configmaps-b9897234-a99b-4788-9c54-ee918f1338d8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.558 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":70,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Aug 13 04:21:14.318: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82b7f9ce-bdeb-4cb3-9d18-988771183f6f" in namespace "projected-3700" to be "Succeeded or Failed"
Aug 13 04:21:14.349: INFO: Pod "downwardapi-volume-82b7f9ce-bdeb-4cb3-9d18-988771183f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.863302ms
Aug 13 04:21:16.380: INFO: Pod "downwardapi-volume-82b7f9ce-bdeb-4cb3-9d18-988771183f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062321167s
Aug 13 04:21:18.411: INFO: Pod "downwardapi-volume-82b7f9ce-bdeb-4cb3-9d18-988771183f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0935116s
Aug 13 04:21:20.444: INFO: Pod "downwardapi-volume-82b7f9ce-bdeb-4cb3-9d18-988771183f6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126453431s
STEP: Saw pod success
Aug 13 04:21:20.445: INFO: Pod "downwardapi-volume-82b7f9ce-bdeb-4cb3-9d18-988771183f6f" satisfied condition "Succeeded or Failed"
Aug 13 04:21:20.475: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod downwardapi-volume-82b7f9ce-bdeb-4cb3-9d18-988771183f6f container client-container: <nil>
STEP: delete the pod
Aug 13 04:21:20.546: INFO: Waiting for pod downwardapi-volume-82b7f9ce-bdeb-4cb3-9d18-988771183f6f to disappear
Aug 13 04:21:20.576: INFO: Pod downwardapi-volume-82b7f9ce-bdeb-4cb3-9d18-988771183f6f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.513 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":29,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1027
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":10,"skipped":69,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:23.146: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 75 lines ...
• [SLOW TEST:11.671 seconds]
[sig-node] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:24.431: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Aug 13 04:21:20.859: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fee4335-155a-4a8e-818f-37ba3c544519" in namespace "projected-9393" to be "Succeeded or Failed"
Aug 13 04:21:20.889: INFO: Pod "downwardapi-volume-5fee4335-155a-4a8e-818f-37ba3c544519": Phase="Pending", Reason="", readiness=false. Elapsed: 30.477999ms
Aug 13 04:21:22.924: INFO: Pod "downwardapi-volume-5fee4335-155a-4a8e-818f-37ba3c544519": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065001393s
Aug 13 04:21:24.956: INFO: Pod "downwardapi-volume-5fee4335-155a-4a8e-818f-37ba3c544519": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097088996s
STEP: Saw pod success
Aug 13 04:21:24.956: INFO: Pod "downwardapi-volume-5fee4335-155a-4a8e-818f-37ba3c544519" satisfied condition "Succeeded or Failed"
Aug 13 04:21:24.987: INFO: Trying to get logs from node ip-172-20-59-139.ca-central-1.compute.internal pod downwardapi-volume-5fee4335-155a-4a8e-818f-37ba3c544519 container client-container: <nil>
STEP: delete the pod
Aug 13 04:21:25.060: INFO: Waiting for pod downwardapi-volume-5fee4335-155a-4a8e-818f-37ba3c544519 to disappear
Aug 13 04:21:25.092: INFO: Pod downwardapi-volume-5fee4335-155a-4a8e-818f-37ba3c544519 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:21:25.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9393" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:25.183: INFO: Only supported for providers [azure] (not aws)
... skipping 45 lines ...
Aug 13 04:21:25.653: INFO: AfterEach: Cleaning up test resources.
Aug 13 04:21:25.653: INFO: pvc is nil
Aug 13 04:21:25.653: INFO: Deleting PersistentVolume "hostpath-pl7cb"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":11,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:25.698: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 90 lines ...
Aug 13 04:21:17.939: INFO: PersistentVolumeClaim pvc-56kmg found but phase is Pending instead of Bound.
Aug 13 04:21:19.970: INFO: PersistentVolumeClaim pvc-56kmg found and phase=Bound (10.204428215s)
Aug 13 04:21:19.970: INFO: Waiting up to 3m0s for PersistentVolume local-mzpmp to have phase Bound
Aug 13 04:21:20.001: INFO: PersistentVolume local-mzpmp found and phase=Bound (30.637468ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dvmm
STEP: Creating a pod to test subpath
Aug 13 04:21:20.095: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dvmm" in namespace "provisioning-1442" to be "Succeeded or Failed"
Aug 13 04:21:20.126: INFO: Pod "pod-subpath-test-preprovisionedpv-dvmm": Phase="Pending", Reason="", readiness=false. Elapsed: 30.620712ms
Aug 13 04:21:22.157: INFO: Pod "pod-subpath-test-preprovisionedpv-dvmm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062229154s
Aug 13 04:21:24.193: INFO: Pod "pod-subpath-test-preprovisionedpv-dvmm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097827962s
Aug 13 04:21:26.225: INFO: Pod "pod-subpath-test-preprovisionedpv-dvmm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130145474s
STEP: Saw pod success
Aug 13 04:21:26.225: INFO: Pod "pod-subpath-test-preprovisionedpv-dvmm" satisfied condition "Succeeded or Failed"
Aug 13 04:21:26.256: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-dvmm container test-container-subpath-preprovisionedpv-dvmm: <nil>
STEP: delete the pod
Aug 13 04:21:26.326: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dvmm to disappear
Aug 13 04:21:26.357: INFO: Pod pod-subpath-test-preprovisionedpv-dvmm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dvmm
Aug 13 04:21:26.357: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dvmm" in namespace "provisioning-1442"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":77,"failed":0}

SSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:21:28.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4069" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":10,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:28.107: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":6,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:32.607: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:32.617: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 13 04:21:20.849: INFO: File wheezy_udp@dns-test-service-3.dns-2721.svc.cluster.local from pod  dns-2721/dns-test-3ec8e72a-22bc-4994-83ae-d53d7c9eb288 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 13 04:21:20.880: INFO: File jessie_udp@dns-test-service-3.dns-2721.svc.cluster.local from pod  dns-2721/dns-test-3ec8e72a-22bc-4994-83ae-d53d7c9eb288 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 13 04:21:20.880: INFO: Lookups using dns-2721/dns-test-3ec8e72a-22bc-4994-83ae-d53d7c9eb288 failed for: [wheezy_udp@dns-test-service-3.dns-2721.svc.cluster.local jessie_udp@dns-test-service-3.dns-2721.svc.cluster.local]

Aug 13 04:21:25.915: INFO: File wheezy_udp@dns-test-service-3.dns-2721.svc.cluster.local from pod  dns-2721/dns-test-3ec8e72a-22bc-4994-83ae-d53d7c9eb288 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 13 04:21:25.946: INFO: File jessie_udp@dns-test-service-3.dns-2721.svc.cluster.local from pod  dns-2721/dns-test-3ec8e72a-22bc-4994-83ae-d53d7c9eb288 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 13 04:21:25.946: INFO: Lookups using dns-2721/dns-test-3ec8e72a-22bc-4994-83ae-d53d7c9eb288 failed for: [wheezy_udp@dns-test-service-3.dns-2721.svc.cluster.local jessie_udp@dns-test-service-3.dns-2721.svc.cluster.local]

Aug 13 04:21:30.943: INFO: DNS probes using dns-test-3ec8e72a-22bc-4994-83ae-d53d7c9eb288 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2721.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2721.svc.cluster.local; sleep 1; done
... skipping 17 lines ...
• [SLOW TEST:39.228 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":10,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:35.432: INFO: Only supported for providers [gce gke] (not aws)
... skipping 43 lines ...
Aug 13 04:21:19.215: INFO: PersistentVolumeClaim pvc-sb8gv found but phase is Pending instead of Bound.
Aug 13 04:21:21.249: INFO: PersistentVolumeClaim pvc-sb8gv found and phase=Bound (12.224522494s)
Aug 13 04:21:21.249: INFO: Waiting up to 3m0s for PersistentVolume local-478tk to have phase Bound
Aug 13 04:21:21.280: INFO: PersistentVolume local-478tk found and phase=Bound (30.855637ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9rdw
STEP: Creating a pod to test subpath
Aug 13 04:21:21.391: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9rdw" in namespace "provisioning-6687" to be "Succeeded or Failed"
Aug 13 04:21:21.422: INFO: Pod "pod-subpath-test-preprovisionedpv-9rdw": Phase="Pending", Reason="", readiness=false. Elapsed: 30.849792ms
Aug 13 04:21:23.456: INFO: Pod "pod-subpath-test-preprovisionedpv-9rdw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065008008s
Aug 13 04:21:25.491: INFO: Pod "pod-subpath-test-preprovisionedpv-9rdw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100626386s
Aug 13 04:21:27.523: INFO: Pod "pod-subpath-test-preprovisionedpv-9rdw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131955464s
Aug 13 04:21:29.554: INFO: Pod "pod-subpath-test-preprovisionedpv-9rdw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.163381258s
STEP: Saw pod success
Aug 13 04:21:29.554: INFO: Pod "pod-subpath-test-preprovisionedpv-9rdw" satisfied condition "Succeeded or Failed"
Aug 13 04:21:29.585: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-9rdw container test-container-subpath-preprovisionedpv-9rdw: <nil>
STEP: delete the pod
Aug 13 04:21:29.657: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9rdw to disappear
Aug 13 04:21:29.688: INFO: Pod pod-subpath-test-preprovisionedpv-9rdw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9rdw
Aug 13 04:21:29.688: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9rdw" in namespace "provisioning-6687"
STEP: Creating pod pod-subpath-test-preprovisionedpv-9rdw
STEP: Creating a pod to test subpath
Aug 13 04:21:29.750: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9rdw" in namespace "provisioning-6687" to be "Succeeded or Failed"
Aug 13 04:21:29.785: INFO: Pod "pod-subpath-test-preprovisionedpv-9rdw": Phase="Pending", Reason="", readiness=false. Elapsed: 34.939464ms
Aug 13 04:21:31.817: INFO: Pod "pod-subpath-test-preprovisionedpv-9rdw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066864747s
Aug 13 04:21:33.852: INFO: Pod "pod-subpath-test-preprovisionedpv-9rdw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102172428s
Aug 13 04:21:35.884: INFO: Pod "pod-subpath-test-preprovisionedpv-9rdw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.134213684s
STEP: Saw pod success
Aug 13 04:21:35.884: INFO: Pod "pod-subpath-test-preprovisionedpv-9rdw" satisfied condition "Succeeded or Failed"
Aug 13 04:21:35.915: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-9rdw container test-container-subpath-preprovisionedpv-9rdw: <nil>
STEP: delete the pod
Aug 13 04:21:35.988: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9rdw to disappear
Aug 13 04:21:36.019: INFO: Pod pod-subpath-test-preprovisionedpv-9rdw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9rdw
Aug 13 04:21:36.019: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9rdw" in namespace "provisioning-6687"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:37.568: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 52 lines ...
• [SLOW TEST:10.455 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":13,"skipped":65,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":81,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:43.389: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 326 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":7,"skipped":36,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:45.737: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:21:46.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1819" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":8,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:21:46.214: INFO: >>> kubeConfig: /root/.kube/config
... skipping 100 lines ...
Aug 13 04:21:18.566: INFO: PersistentVolumeClaim pvc-spp7k found but phase is Pending instead of Bound.
Aug 13 04:21:20.598: INFO: PersistentVolumeClaim pvc-spp7k found and phase=Bound (12.220774751s)
Aug 13 04:21:20.598: INFO: Waiting up to 3m0s for PersistentVolume aws-c2lp4 to have phase Bound
Aug 13 04:21:20.628: INFO: PersistentVolume aws-c2lp4 found and phase=Bound (30.67086ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-c6sv
STEP: Creating a pod to test exec-volume-test
Aug 13 04:21:20.729: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-c6sv" in namespace "volume-8134" to be "Succeeded or Failed"
Aug 13 04:21:20.759: INFO: Pod "exec-volume-test-preprovisionedpv-c6sv": Phase="Pending", Reason="", readiness=false. Elapsed: 30.283788ms
Aug 13 04:21:22.791: INFO: Pod "exec-volume-test-preprovisionedpv-c6sv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062281398s
Aug 13 04:21:24.823: INFO: Pod "exec-volume-test-preprovisionedpv-c6sv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094341935s
Aug 13 04:21:26.855: INFO: Pod "exec-volume-test-preprovisionedpv-c6sv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126166931s
Aug 13 04:21:28.886: INFO: Pod "exec-volume-test-preprovisionedpv-c6sv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15718602s
Aug 13 04:21:30.917: INFO: Pod "exec-volume-test-preprovisionedpv-c6sv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.188156787s
Aug 13 04:21:32.957: INFO: Pod "exec-volume-test-preprovisionedpv-c6sv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.228280694s
Aug 13 04:21:34.989: INFO: Pod "exec-volume-test-preprovisionedpv-c6sv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.260303947s
STEP: Saw pod success
Aug 13 04:21:34.989: INFO: Pod "exec-volume-test-preprovisionedpv-c6sv" satisfied condition "Succeeded or Failed"
Aug 13 04:21:35.020: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-c6sv container exec-container-preprovisionedpv-c6sv: <nil>
STEP: delete the pod
Aug 13 04:21:35.091: INFO: Waiting for pod exec-volume-test-preprovisionedpv-c6sv to disappear
Aug 13 04:21:35.122: INFO: Pod exec-volume-test-preprovisionedpv-c6sv no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-c6sv
Aug 13 04:21:35.122: INFO: Deleting pod "exec-volume-test-preprovisionedpv-c6sv" in namespace "volume-8134"
STEP: Deleting pv and pvc
Aug 13 04:21:35.153: INFO: Deleting PersistentVolumeClaim "pvc-spp7k"
Aug 13 04:21:35.188: INFO: Deleting PersistentVolume "aws-c2lp4"
Aug 13 04:21:35.391: INFO: Couldn't delete PD "aws://ca-central-1a/vol-03a1b93ac654f22de", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03a1b93ac654f22de is currently attached to i-0c14d6c681e2c4f4f
	status code: 400, request id: 280a31b5-c354-4307-95d9-cb86369978eb
Aug 13 04:21:40.678: INFO: Couldn't delete PD "aws://ca-central-1a/vol-03a1b93ac654f22de", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03a1b93ac654f22de is currently attached to i-0c14d6c681e2c4f4f
	status code: 400, request id: af1dcf95-be4c-472f-8891-cd0041e90707
Aug 13 04:21:45.953: INFO: Couldn't delete PD "aws://ca-central-1a/vol-03a1b93ac654f22de", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03a1b93ac654f22de is currently attached to i-0c14d6c681e2c4f4f
	status code: 400, request id: 913bfcc5-0578-4dfc-94f8-f664ae715b59
Aug 13 04:21:51.231: INFO: Successfully deleted PD "aws://ca-central-1a/vol-03a1b93ac654f22de".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:21:51.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8134" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:51.321: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 101 lines ...
Aug 13 04:20:42.704: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1099svgfd
STEP: creating a claim
Aug 13 04:20:42.743: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-cwpn
STEP: Creating a pod to test subpath
Aug 13 04:20:42.841: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-cwpn" in namespace "provisioning-1099" to be "Succeeded or Failed"
Aug 13 04:20:42.871: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 30.203194ms
Aug 13 04:20:44.903: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061484536s
Aug 13 04:20:46.934: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093355926s
Aug 13 04:20:48.966: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124575922s
Aug 13 04:20:50.997: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155897641s
Aug 13 04:20:53.028: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.18686445s
... skipping 4 lines ...
Aug 13 04:21:03.187: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 20.345828308s
Aug 13 04:21:05.266: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 22.424639677s
Aug 13 04:21:07.297: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 24.456044865s
Aug 13 04:21:09.328: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 26.487310781s
Aug 13 04:21:11.360: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.519162602s
STEP: Saw pod success
Aug 13 04:21:11.360: INFO: Pod "pod-subpath-test-dynamicpv-cwpn" satisfied condition "Succeeded or Failed"
Aug 13 04:21:11.397: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-cwpn container test-container-subpath-dynamicpv-cwpn: <nil>
STEP: delete the pod
Aug 13 04:21:11.468: INFO: Waiting for pod pod-subpath-test-dynamicpv-cwpn to disappear
Aug 13 04:21:11.498: INFO: Pod pod-subpath-test-dynamicpv-cwpn no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-cwpn
Aug 13 04:21:11.498: INFO: Deleting pod "pod-subpath-test-dynamicpv-cwpn" in namespace "provisioning-1099"
STEP: Creating pod pod-subpath-test-dynamicpv-cwpn
STEP: Creating a pod to test subpath
Aug 13 04:21:11.560: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-cwpn" in namespace "provisioning-1099" to be "Succeeded or Failed"
Aug 13 04:21:11.590: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 30.469424ms
Aug 13 04:21:13.622: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061766037s
Aug 13 04:21:15.657: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097515123s
Aug 13 04:21:17.689: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129328073s
Aug 13 04:21:19.720: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160372832s
Aug 13 04:21:21.752: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.19254348s
... skipping 2 lines ...
Aug 13 04:21:27.848: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.288110948s
Aug 13 04:21:29.880: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 18.319994024s
Aug 13 04:21:31.911: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 20.351104673s
Aug 13 04:21:33.945: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 22.385093031s
Aug 13 04:21:35.976: INFO: Pod "pod-subpath-test-dynamicpv-cwpn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.416672624s
STEP: Saw pod success
Aug 13 04:21:35.977: INFO: Pod "pod-subpath-test-dynamicpv-cwpn" satisfied condition "Succeeded or Failed"
Aug 13 04:21:36.007: INFO: Trying to get logs from node ip-172-20-37-248.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-cwpn container test-container-subpath-dynamicpv-cwpn: <nil>
STEP: delete the pod
Aug 13 04:21:36.079: INFO: Waiting for pod pod-subpath-test-dynamicpv-cwpn to disappear
Aug 13 04:21:36.109: INFO: Pod pod-subpath-test-dynamicpv-cwpn no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-cwpn
Aug 13 04:21:36.110: INFO: Deleting pod "pod-subpath-test-dynamicpv-cwpn" in namespace "provisioning-1099"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":10,"skipped":81,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:51.555: INFO: Only supported for providers [gce gke] (not aws)
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:21:51.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-668" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:51.872: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:21:52.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:21:52.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-677" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:53.014: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 99 lines ...
Aug 13 04:21:16.533: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-4500sgl6k
STEP: creating a claim
Aug 13 04:21:16.568: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-9xcj
STEP: Creating a pod to test subpath
Aug 13 04:21:16.665: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9xcj" in namespace "provisioning-4500" to be "Succeeded or Failed"
Aug 13 04:21:16.696: INFO: Pod "pod-subpath-test-dynamicpv-9xcj": Phase="Pending", Reason="", readiness=false. Elapsed: 30.86625ms
Aug 13 04:21:18.727: INFO: Pod "pod-subpath-test-dynamicpv-9xcj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062040259s
Aug 13 04:21:20.758: INFO: Pod "pod-subpath-test-dynamicpv-9xcj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093147481s
Aug 13 04:21:22.791: INFO: Pod "pod-subpath-test-dynamicpv-9xcj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125960298s
Aug 13 04:21:24.823: INFO: Pod "pod-subpath-test-dynamicpv-9xcj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157500488s
Aug 13 04:21:26.857: INFO: Pod "pod-subpath-test-dynamicpv-9xcj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.192048416s
Aug 13 04:21:28.888: INFO: Pod "pod-subpath-test-dynamicpv-9xcj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.22340137s
Aug 13 04:21:30.920: INFO: Pod "pod-subpath-test-dynamicpv-9xcj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.254779647s
Aug 13 04:21:32.957: INFO: Pod "pod-subpath-test-dynamicpv-9xcj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.292182044s
STEP: Saw pod success
Aug 13 04:21:32.957: INFO: Pod "pod-subpath-test-dynamicpv-9xcj" satisfied condition "Succeeded or Failed"
Aug 13 04:21:32.994: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-9xcj container test-container-volume-dynamicpv-9xcj: <nil>
STEP: delete the pod
Aug 13 04:21:33.068: INFO: Waiting for pod pod-subpath-test-dynamicpv-9xcj to disappear
Aug 13 04:21:33.099: INFO: Pod pod-subpath-test-dynamicpv-9xcj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9xcj
Aug 13 04:21:33.099: INFO: Deleting pod "pod-subpath-test-dynamicpv-9xcj" in namespace "provisioning-4500"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 35 lines ...
Aug 13 04:21:47.484: INFO: PersistentVolumeClaim pvc-2z2tg found but phase is Pending instead of Bound.
Aug 13 04:21:49.515: INFO: PersistentVolumeClaim pvc-2z2tg found and phase=Bound (12.220647053s)
Aug 13 04:21:49.515: INFO: Waiting up to 3m0s for PersistentVolume local-pcc8l to have phase Bound
Aug 13 04:21:49.546: INFO: PersistentVolume local-pcc8l found and phase=Bound (30.707367ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9mnp
STEP: Creating a pod to test subpath
Aug 13 04:21:49.645: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9mnp" in namespace "provisioning-6367" to be "Succeeded or Failed"
Aug 13 04:21:49.677: INFO: Pod "pod-subpath-test-preprovisionedpv-9mnp": Phase="Pending", Reason="", readiness=false. Elapsed: 32.767875ms
Aug 13 04:21:51.709: INFO: Pod "pod-subpath-test-preprovisionedpv-9mnp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064860743s
Aug 13 04:21:53.741: INFO: Pod "pod-subpath-test-preprovisionedpv-9mnp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096531681s
STEP: Saw pod success
Aug 13 04:21:53.741: INFO: Pod "pod-subpath-test-preprovisionedpv-9mnp" satisfied condition "Succeeded or Failed"
Aug 13 04:21:53.772: INFO: Trying to get logs from node ip-172-20-60-176.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-9mnp container test-container-volume-preprovisionedpv-9mnp: <nil>
STEP: delete the pod
Aug 13 04:21:53.842: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9mnp to disappear
Aug 13 04:21:53.873: INFO: Pod pod-subpath-test-preprovisionedpv-9mnp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9mnp
Aug 13 04:21:53.873: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9mnp" in namespace "provisioning-6367"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug 13 04:21:53.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Aug 13 04:21:53.812: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f64d1036-082e-4ca1-9908-5ea9f42ef934" in namespace "security-context-test-7555" to be "Succeeded or Failed"
Aug 13 04:21:53.845: INFO: Pod "busybox-privileged-false-f64d1036-082e-4ca1-9908-5ea9f42ef934": Phase="Pending", Reason="", readiness=false. Elapsed: 32.979034ms
Aug 13 04:21:55.888: INFO: Pod "busybox-privileged-false-f64d1036-082e-4ca1-9908-5ea9f42ef934": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.076099814s
Aug 13 04:21:55.888: INFO: Pod "busybox-privileged-false-f64d1036-082e-4ca1-9908-5ea9f42ef934" satisfied condition "Succeeded or Failed"
Aug 13 04:21:55.925: INFO: Got logs for pod "busybox-privileged-false-f64d1036-082e-4ca1-9908-5ea9f42ef934": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug 13 04:21:55.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7555" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug 13 04:21:56.018: INFO: Only supported for providers [gce gke] (not aws)
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if cluster-info dump succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1078
STEP: running cluster-info dump
Aug 13 04:21:53.694: INFO: Running '/tmp/kubectl591285266/kubectl --server=https://api.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-665 cluster-info dump'
Aug 13 04:21:55.825: INFO: stderr: ""
Aug 13 04:21:55.829: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12346\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-37-248.ca-central-1.compute.internal\",\n                \"uid\": \"b8eb3617-de9b-429e-ad36-06f9f308a410\",\n                \"resourceVersion\": \"12097\",\n                \"creationTimestamp\": \"2021-08-13T04:14:44Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ca-central-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ca-central-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ca-central-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-37-248.ca-central-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-37-248.ca-central-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"ca-central-1\",\n                    \"topology.kubernetes.io/zone\": \"ca-central-1a\"\n                },\n                \"annotations\": {\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.4.45\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.4.70\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.4.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    
\"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.4.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.4.0/24\"\n                ],\n                \"providerID\": \"aws:///ca-central-1a/i-0c14d6c681e2c4f4f\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3968644Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3866244Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:15:11Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:15:11Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:46Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:44Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n            
            \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:46Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:44Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:46Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:44Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:46Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:15:04Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.37.248\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"15.222.46.137\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-37-248.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-37-248.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-15-222-46-137.ca-central-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec236e3529470a8f0e5d9f4aedde35f9\",\n                    \"systemUUID\": \"ec236e35-2947-0a8f-0e5d-9f4aedde35f9\",\n                    \"bootID\": \"6137a6e8-5902-4e3f-a9d2-fd7da54c603f\",\n                    \"kernelVersion\": \"5.8.0-1041-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"docker://20.10.8\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            
\"quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                            \"quay.io/cilium/cilium:v1.10.3\"\n                        ],\n                        \"sizeBytes\": 412997784\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2\",\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs:1.2\"\n                        ],\n                        \"sizeBytes\": 263881150\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89\",\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4\"\n                        ],\n                        \"sizeBytes\": 253371792\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\"\n                        ],\n                        \"sizeBytes\": 126894770\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 125930239\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n       
                     \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 123781643\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 103317730\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        \"sizeBytes\": 51658444\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 49559562\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 49195358\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 49176472\n                    },\n                    {\n                        \"names\": [\n       
                     \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                            \"k8s.gcr.io/coredns/coredns:v1.8.4\"\n                        ],\n                        \"sizeBytes\": 47554275\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 27762720\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 17997083\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 17776649\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 16032814\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 1154361\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07\",\n                            \"k8s.gcr.io/pause:3.5\"\n                        ],\n                        \"sizeBytes\": 682696\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0d65cfa52be4a7381\",\n                    \"kubernetes.io/csi/csi-hostpath-provisioning-6584^cd9965b1-fbed-11eb-9afb-0eabcb9a5f5e\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-hostpath-provisioning-6584^cd9965b1-fbed-11eb-9afb-0eabcb9a5f5e\",\n                        \"devicePath\": \"\"\n                    },\n                    {\n                        \"name\": \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0d65cfa52be4a7381\",\n                        \"devicePath\": \"/dev/xvdbx\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"2e28d31f-be95-430d-8281-fdf613ecdcb3\",\n                \"resourceVersion\": \"7367\",\n             
   \"creationTimestamp\": \"2021-08-13T04:13:08Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"c5.large\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ca-central-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ca-central-1a\",\n                    \"kops.k8s.io/instancegroup\": \"master-ca-central-1a\",\n                    \"kops.k8s.io/kops-controller-pki\": \"\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-39-193.ca-central-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"master\",\n                    \"node-role.kubernetes.io/control-plane\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\",\n                    \"node.kubernetes.io/exclude-from-external-load-balancers\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"c5.large\",\n                    \"topology.kubernetes.io/region\": \"ca-central-1\",\n                    \"topology.kubernetes.io/zone\": \"ca-central-1a\"\n                },\n                \"annotations\": {\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.0.54\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.0.20\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.0.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.0.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.0.0/24\"\n                ],\n                \"providerID\": 
\"aws:///ca-central-1a/i-0173eaa103e1a7ac3\",\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3784336Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3681936Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:15:05Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:15:05Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:19:49Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:13:08Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n              
      },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:19:49Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:13:08Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:19:49Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:13:08Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:19:49Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:13:58Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.39.193\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"35.183.100.219\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-35-183-100-219.ca-central-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec20b13b71d97cca5adba672e4b97364\",\n                    \"systemUUID\": \"ec20b13b-71d9-7cca-5adb-a672e4b97364\",\n                    \"bootID\": \"7363617b-94e9-4fb1-ab7d-c3e8952fb08c\",\n                    \"kernelVersion\": \"5.8.0-1041-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"docker://20.10.8\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4\",\n                            \"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\"\n                        ],\n                        \"sizeBytes\": 507676854\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                            \"quay.io/cilium/cilium:v1.10.3\"\n                        ],\n                        \"sizeBytes\": 412997784\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 125628829\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 119841468\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2\"\n                        ],\n                        \"sizeBytes\": 112541703\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2\"\n                        ],\n                        \"sizeBytes\": 111612340\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 103317730\n                    },\n                    {\n                        \"names\": [\n                            
\"quay.io/cilium/operator@sha256:5c64867fbf3e09c1f05a44c6b4954ca19563230e89ff29724c7845ca550be66e\",\n                            \"quay.io/cilium/operator:v1.10.3\"\n                        ],\n                        \"sizeBytes\": 98381461\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 50639548\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.2\"\n                        ],\n                        \"sizeBytes\": 24018118\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07\",\n                            \"k8s.gcr.io/pause:3.5\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-46-56.ca-central-1.compute.internal\",\n                \"uid\": \"9f8902d9-f202-45e7-802c-703bd481703d\",\n                \"resourceVersion\": \"12254\",\n                \"creationTimestamp\": \"2021-08-13T04:14:39Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ca-central-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ca-central-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ca-central-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    
\"kubernetes.io/hostname\": \"ip-172-20-46-56.ca-central-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-46-56.ca-central-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"ca-central-1\",\n                    \"topology.kubernetes.io/zone\": \"ca-central-1a\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-volume-expand-6586\\\":\\\"ip-172-20-46-56.ca-central-1.compute.internal\\\"}\",\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.1.94\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.1.143\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.1.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.1.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.1.0/24\"\n                ],\n                \"providerID\": \"aws:///ca-central-1a/i-0ac802407d70fe433\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3968644Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                
    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3866244Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:15:05Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:15:05Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:51Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:39Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:51Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:39Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:51Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:39Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        
\"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:51Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:59Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.46.56\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"35.182.39.72\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-46-56.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-46-56.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-35-182-39-72.ca-central-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2978679973c3f8023b95ddf8d0da38\",\n                    \"systemUUID\": \"ec297867-9973-c3f8-023b-95ddf8d0da38\",\n                    \"bootID\": \"bd7d33ca-485b-45e2-b199-4d6c7e0f9ca5\",\n                    \"kernelVersion\": \"5.8.0-1041-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 
LTS\",\n                    \"containerRuntimeVersion\": \"docker://20.10.8\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                            \"quay.io/cilium/cilium:v1.10.3\"\n                        ],\n                        \"sizeBytes\": 412997784\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2\",\n                            \"k8s.gcr.io/e2e-test-images/volume/nfs:1.2\"\n                        ],\n                        \"sizeBytes\": 263881150\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 125930239\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 123781643\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n         
               \"sizeBytes\": 103317730\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        \"sizeBytes\": 51658444\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 51645752\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 49559562\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 49195358\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 49176472\n                    },\n                   
 {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 46131354\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def\",\n                            \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\"\n                        ],\n                        \"sizeBytes\": 40678121\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 27762720\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 19662887\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 17997083\n                    },\n                    {\n                   
     \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 17776649\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 17680993\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 16032814\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 1154361\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07\",\n                            \"k8s.gcr.io/pause:3.5\"\n                        ],\n                        \"sizeBytes\": 682696\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n          
                  \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-59-139.ca-central-1.compute.internal\",\n                \"uid\": \"fd952f53-f6a0-4364-aaa8-23b8e31c5fb5\",\n                \"resourceVersion\": \"11660\",\n                \"creationTimestamp\": \"2021-08-13T04:14:41Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ca-central-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ca-central-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ca-central-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-59-139.ca-central-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.kubernetes.io/region\": \"ca-central-1\",\n                    \"topology.kubernetes.io/zone\": \"ca-central-1a\"\n                },\n                \"annotations\": {\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.2.168\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.2.34\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.2.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            
\"spec\": {\n                \"podCIDR\": \"100.96.2.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.2.0/24\"\n                ],\n                \"providerID\": \"aws:///ca-central-1a/i-0e5af0fe1d25af3e5\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3968644Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3866244Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:15:07Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:15:07Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:22Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:41Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    
{\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:22Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:41Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:22Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:41Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:22Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:15:01Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.59.139\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"99.79.60.193\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-59-139.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-59-139.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-99-79-60-193.ca-central-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2d1bb739bfb57b69626311e28327cc\",\n                    \"systemUUID\": \"ec2d1bb7-39bf-b57b-6962-6311e28327cc\",\n                    \"bootID\": \"33fb8665-2abc-482a-a688-4297eca7c68e\",\n                    \"kernelVersion\": \"5.8.0-1041-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"docker://20.10.8\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            
\"quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                            \"quay.io/cilium/cilium:v1.10.3\"\n                        ],\n                        \"sizeBytes\": 412997784\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\"\n                        ],\n                        \"sizeBytes\": 126894770\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 125930239\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 123781643\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 103317730\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 51645752\n          
          },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890\",\n                            \"k8s.gcr.io/coredns/coredns:v1.8.4\"\n                        ],\n                        \"sizeBytes\": 47554275\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 46131354\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b\",\n                            \"k8s.gcr.io/e2e-test-images/nonroot:1.1\"\n                        ],\n                        \"sizeBytes\": 42321438\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 19662887\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 17680993\n                    },\n                    {\n                        \"names\": [\n                        
    \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 16032814\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac\",\n                            \"k8s.gcr.io/e2e-test-images/nonewprivs:1.3\"\n                        ],\n                        \"sizeBytes\": 7107254\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 1154361\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07\",\n                            \"k8s.gcr.io/pause:3.5\"\n                        ],\n                        \"sizeBytes\": 682696\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-05d09ebce43825081\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": 
\"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-05d09ebce43825081\",\n                        \"devicePath\": \"/dev/xvdcf\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"ip-172-20-60-176.ca-central-1.compute.internal\",\n                \"uid\": \"42fa71fe-0263-4b7d-8f9a-b4fe8999b863\",\n                \"resourceVersion\": \"12210\",\n                \"creationTimestamp\": \"2021-08-13T04:14:43Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"failure-domain.beta.kubernetes.io/region\": \"ca-central-1\",\n                    \"failure-domain.beta.kubernetes.io/zone\": \"ca-central-1a\",\n                    \"kops.k8s.io/instancegroup\": \"nodes-ca-central-1a\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"ip-172-20-60-176.ca-central-1.compute.internal\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/role\": \"node\",\n                    \"node-role.kubernetes.io/node\": \"\",\n                    \"node.kubernetes.io/instance-type\": \"t3.medium\",\n                    \"topology.hostpath.csi/node\": \"ip-172-20-60-176.ca-central-1.compute.internal\",\n                    \"topology.kubernetes.io/region\": \"ca-central-1\",\n                    \"topology.kubernetes.io/zone\": \"ca-central-1a\"\n                },\n                \"annotations\": {\n                    \"io.cilium.network.ipv4-cilium-host\": \"100.96.3.62\",\n                    \"io.cilium.network.ipv4-health-ip\": \"100.96.3.252\",\n                    \"io.cilium.network.ipv4-pod-cidr\": \"100.96.3.0/24\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    
\"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"100.96.3.0/24\",\n                \"podCIDRs\": [\n                    \"100.96.3.0/24\"\n                ],\n                \"providerID\": \"aws:///ca-central-1a/i-0c2319c994ff72d6e\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"48725632Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3968644Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"attachable-volumes-aws-ebs\": \"25\",\n                    \"cpu\": \"2\",\n                    \"ephemeral-storage\": \"44905542377\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"3866244Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"NetworkUnavailable\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:15:09Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:15:09Z\",\n                        \"reason\": \"CiliumIsUp\",\n                        \"message\": \"Cilium is running on this node\"\n                    },\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:35Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:43Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n            
            \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:35Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:43Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:35Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:43Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-08-13T04:21:35Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:15:04Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status. 
AppArmor enabled\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.20.60.176\"\n                    },\n                    {\n                        \"type\": \"ExternalIP\",\n                        \"address\": \"99.79.32.230\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"ip-172-20-60-176.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"InternalDNS\",\n                        \"address\": \"ip-172-20-60-176.ca-central-1.compute.internal\"\n                    },\n                    {\n                        \"type\": \"ExternalDNS\",\n                        \"address\": \"ec2-99-79-32-230.ca-central-1.compute.amazonaws.com\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"ec2c366204f85ed47e6d19962f112e92\",\n                    \"systemUUID\": \"ec2c3662-04f8-5ed4-7e6d-19962f112e92\",\n                    \"bootID\": \"51fbc42b-acec-4d84-825c-8b96fd1f4f1d\",\n                    \"kernelVersion\": \"5.8.0-1041-aws\",\n                    \"osImage\": \"Ubuntu 20.04.2 LTS\",\n                    \"containerRuntimeVersion\": \"docker://20.10.8\",\n                    \"kubeletVersion\": \"v1.21.4\",\n                    \"kubeProxyVersion\": \"v1.21.4\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            
\"quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                            \"quay.io/cilium/cilium:v1.10.3\"\n                        ],\n                        \"sizeBytes\": 412997784\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89\",\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4\"\n                        ],\n                        \"sizeBytes\": 253371792\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\"\n                        ],\n                        \"sizeBytes\": 126894770\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n                        ],\n                        \"sizeBytes\": 125930239\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n                        ],\n                        \"sizeBytes\": 123781643\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy-amd64:v1.21.4\"\n                        ],\n                        \"sizeBytes\": 103317730\n    
                },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1\"\n                        ],\n                        \"sizeBytes\": 51658444\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n                            \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 51645752\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782\",\n                            \"k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0\"\n                        ],\n                        \"sizeBytes\": 49559562\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09\",\n                            \"k8s.gcr.io/sig-storage/csi-attacher:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 49195358\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a\",\n                            \"k8s.gcr.io/sig-storage/csi-resizer:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 49176472\n                    },\n                    {\n                        \"names\": [\n   
                         \"k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5\",\n                            \"k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0\"\n                        ],\n                        \"sizeBytes\": 27762720\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 19662887\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108\",\n                            \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1\"\n                        ],\n                        \"sizeBytes\": 17997083\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994\",\n                            \"k8s.gcr.io/sig-storage/livenessprobe:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 17776649\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n                            \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n                        ],\n                        \"sizeBytes\": 17680993\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n                        ],\n                        \"sizeBytes\": 16032814\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n                        ],\n                        \"sizeBytes\": 1154361\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07\",\n                            \"k8s.gcr.io/pause:3.5\"\n                        ],\n                        \"sizeBytes\": 682696\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 682696\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0242c3e8b09f5f827\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0242c3e8b09f5f827\",\n                        \"devicePath\": \"/dev/xvdch\"\n                    },\n                    {\n                        \"name\": \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0d5795c41bd38e9e9\",\n                        \"devicePath\": \"/dev/xvdbc\"\n                    },\n                    {\n                    
    \"name\": \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-03e2633b933d42409\",\n                        \"devicePath\": \"/dev/xvdbj\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"5267\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"cilium-crxcg.169ac272fa9950a7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"eed14c89-73c6-4906-b658-4ae6af4230dd\",\n                \"resourceVersion\": \"56\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-crxcg\",\n                \"uid\": \"153c5cc5-1ee0-4582-811c-3f03e59a2319\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"435\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-crxcg to ip-172-20-39-193.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-crxcg.169ac273183edd56\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f70da2ee-a8b9-4d9e-a958-da75f18e91b0\",\n                \"resourceVersion\": \"69\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": 
{\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-crxcg\",\n                \"uid\": \"153c5cc5-1ee0-4582-811c-3f03e59a2319\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/cilium/cilium:v1.10.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-crxcg.169ac274c86de7b6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"93572f9e-f14d-4553-b47d-09baa58b20cc\",\n                \"resourceVersion\": \"74\",\n                \"creationTimestamp\": \"2021-08-13T04:13:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-crxcg\",\n                \"uid\": \"153c5cc5-1ee0-4582-811c-3f03e59a2319\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/cilium/cilium:v1.10.3\\\" in 7.250829747s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:42Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-crxcg.169ac275083bb8ba\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"17da4cc4-35bb-4d20-86a2-cfaeab1120bb\",\n                \"resourceVersion\": \"75\",\n                \"creationTimestamp\": \"2021-08-13T04:13:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-crxcg\",\n                \"uid\": \"153c5cc5-1ee0-4582-811c-3f03e59a2319\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:43Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-crxcg.169ac275106117b9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"90000c4f-3516-4f30-935c-f822a7ce4e81\",\n                \"resourceVersion\": \"76\",\n                
\"creationTimestamp\": \"2021-08-13T04:13:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-crxcg\",\n                \"uid\": \"153c5cc5-1ee0-4582-811c-3f03e59a2319\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:43Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-crxcg.169ac27521e6a6a8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4913a4fa-5f1a-42b0-862d-f828fc7556be\",\n                \"resourceVersion\": \"77\",\n                \"creationTimestamp\": \"2021-08-13T04:13:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-crxcg\",\n                \"uid\": \"153c5cc5-1ee0-4582-811c-3f03e59a2319\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/cilium/cilium:v1.10.3\\\" already present on machine\",\n            \"source\": {\n                
\"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:43Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-crxcg.169ac27524d67230\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"822a8bf7-9785-44d3-a3de-a3e9f370aff2\",\n                \"resourceVersion\": \"78\",\n                \"creationTimestamp\": \"2021-08-13T04:13:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-crxcg\",\n                \"uid\": \"153c5cc5-1ee0-4582-811c-3f03e59a2319\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:43Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-crxcg.169ac2752b071af5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cf658fe6-e5d9-4a7f-b220-1a1a04dcea48\",\n                
\"resourceVersion\": \"79\",\n                \"creationTimestamp\": \"2021-08-13T04:13:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-crxcg\",\n                \"uid\": \"153c5cc5-1ee0-4582-811c-3f03e59a2319\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:43Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-crxcg.169ac275c5e8fe09\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3534fd1d-05fa-43c1-be2f-337d60a22740\",\n                \"resourceVersion\": \"122\",\n                \"creationTimestamp\": \"2021-08-13T04:13:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-crxcg\",\n                \"uid\": \"153c5cc5-1ee0-4582-811c-3f03e59a2319\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"436\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Unhealthy\",\n            \"message\": \"Startup probe failed: Get \\\"http://127.0.0.1:9876/healthz\\\": dial tcp 127.0.0.1:9876: 
connect: connection refused\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:46Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:20Z\",\n            \"count\": 18,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-d2ptz.169ac2822d276fe3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d0c2e2e5-d93c-4489-93c9-6f1eb6dbb09d\",\n                \"resourceVersion\": \"134\",\n                \"creationTimestamp\": \"2021-08-13T04:14:39Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-d2ptz\",\n                \"uid\": \"fbd4f950-6100-413a-8443-9001a7dbe943\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"619\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-d2ptz to ip-172-20-46-56.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:39Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:39Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-d2ptz.169ac282f683416a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e4b14830-25a5-4d99-a67e-fa2eea972ecb\",\n                
\"resourceVersion\": \"147\",\n                \"creationTimestamp\": \"2021-08-13T04:14:42Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-d2ptz\",\n                \"uid\": \"fbd4f950-6100-413a-8443-9001a7dbe943\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"622\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/cilium/cilium:v1.10.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-46-56.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:42Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:42Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-d2ptz.169ac284966168d3\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c1d19650-55a6-4cce-ba3e-3231741b2b6a\",\n                \"resourceVersion\": \"175\",\n                \"creationTimestamp\": \"2021-08-13T04:14:49Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-d2ptz\",\n                \"uid\": \"fbd4f950-6100-413a-8443-9001a7dbe943\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"622\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image 
\\\"quay.io/cilium/cilium:v1.10.3\\\" in 6.977086877s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-46-56.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:49Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:49Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-d2ptz.169ac284f3b30120\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cf2921c1-e908-4e73-833e-62b6e48ac5c6\",\n                \"resourceVersion\": \"178\",\n                \"creationTimestamp\": \"2021-08-13T04:14:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-d2ptz\",\n                \"uid\": \"fbd4f950-6100-413a-8443-9001a7dbe943\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"622\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-46-56.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:51Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-d2ptz.169ac284fb3bdaae\",\n                
\"namespace\": \"kube-system\",\n                \"uid\": \"5234ed52-aeeb-4783-b818-91513f2fcdc0\",\n                \"resourceVersion\": \"179\",\n                \"creationTimestamp\": \"2021-08-13T04:14:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-d2ptz\",\n                \"uid\": \"fbd4f950-6100-413a-8443-9001a7dbe943\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"622\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-46-56.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:51Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-d2ptz.169ac285375d4263\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e9d507c9-ba8d-4025-a6e8-ed1f442e71b9\",\n                \"resourceVersion\": \"181\",\n                \"creationTimestamp\": \"2021-08-13T04:14:52Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-d2ptz\",\n                \"uid\": \"fbd4f950-6100-413a-8443-9001a7dbe943\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"622\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": 
\"Pulled\",\n            \"message\": \"Container image \\\"quay.io/cilium/cilium:v1.10.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-46-56.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:52Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:52Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-d2ptz.169ac2853983b198\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9fb5153b-0904-44fa-a248-aab5f3a5d56d\",\n                \"resourceVersion\": \"182\",\n                \"creationTimestamp\": \"2021-08-13T04:14:52Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-d2ptz\",\n                \"uid\": \"fbd4f950-6100-413a-8443-9001a7dbe943\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"622\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-46-56.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:52Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:52Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"cilium-d2ptz.169ac2853f694e11\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c28d01d3-9c9a-4401-8e82-3ece329a089b\",\n                \"resourceVersion\": \"183\",\n                \"creationTimestamp\": \"2021-08-13T04:14:52Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-d2ptz\",\n                \"uid\": \"fbd4f950-6100-413a-8443-9001a7dbe943\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"622\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-46-56.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:52Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:52Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-d2ptz.169ac285c0287555\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"87c7111c-5e24-4965-bafc-392f088f0a71\",\n                \"resourceVersion\": \"224\",\n                \"creationTimestamp\": \"2021-08-13T04:14:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-d2ptz\",\n                \"uid\": \"fbd4f950-6100-413a-8443-9001a7dbe943\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"622\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n        
    },\n            \"reason\": \"Unhealthy\",\n            \"message\": \"Startup probe failed: Get \\\"http://127.0.0.1:9876/healthz\\\": dial tcp 127.0.0.1:9876: connect: connection refused\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-46-56.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:54Z\",\n            \"lastTimestamp\": \"2021-08-13T04:15:04Z\",\n            \"count\": 6,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-mnwnd.169ac28364633eca\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"25a52c05-f5d3-4ca8-b0e0-9c7f95a1cee5\",\n                \"resourceVersion\": \"166\",\n                \"creationTimestamp\": \"2021-08-13T04:14:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-mnwnd\",\n                \"uid\": \"250f053f-8797-45ba-98d0-f6402d7c5525\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"663\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-mnwnd to ip-172-20-37-248.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:44Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"cilium-mnwnd.169ac28433446e1b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f3f82e21-479c-4f51-9fc3-5f191e70fc7f\",\n                \"resourceVersion\": \"173\",\n                \"creationTimestamp\": \"2021-08-13T04:14:48Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-mnwnd\",\n                \"uid\": \"250f053f-8797-45ba-98d0-f6402d7c5525\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"665\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/cilium/cilium:v1.10.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-248.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:48Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:48Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-mnwnd.169ac285d377ab0a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"526b788c-eb46-46be-bca6-e85a6b9b8922\",\n                \"resourceVersion\": \"191\",\n                \"creationTimestamp\": \"2021-08-13T04:14:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-mnwnd\",\n                \"uid\": \"250f053f-8797-45ba-98d0-f6402d7c5525\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"665\",\n                \"fieldPath\": 
\"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/cilium/cilium:v1.10.3\\\" in 6.982648766s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-248.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:55Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-mnwnd.169ac286329d08b5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fd3ee632-a52b-46c5-8f3d-43d4d7030dd1\",\n                \"resourceVersion\": \"198\",\n                \"creationTimestamp\": \"2021-08-13T04:14:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-mnwnd\",\n                \"uid\": \"250f053f-8797-45ba-98d0-f6402d7c5525\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"665\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-248.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:56Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            
\"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-mnwnd.169ac2863cd9e2df\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e26ba4ea-4d71-4a7e-af4c-843a2d58f401\",\n                \"resourceVersion\": \"200\",\n                \"creationTimestamp\": \"2021-08-13T04:14:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-mnwnd\",\n                \"uid\": \"250f053f-8797-45ba-98d0-f6402d7c5525\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"665\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-248.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:57Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-mnwnd.169ac286763b9b71\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cf0a92e4-9a30-4c14-a041-24da3dd999b6\",\n                \"resourceVersion\": \"202\",\n                \"creationTimestamp\": \"2021-08-13T04:14:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-mnwnd\",\n                \"uid\": \"250f053f-8797-45ba-98d0-f6402d7c5525\",\n                \"apiVersion\": 
\"v1\",\n                \"resourceVersion\": \"665\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/cilium/cilium:v1.10.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-248.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:58Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-mnwnd.169ac28678e3572a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7900f1bb-40d5-4729-8e3c-84e8da48fda9\",\n                \"resourceVersion\": \"203\",\n                \"creationTimestamp\": \"2021-08-13T04:14:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-mnwnd\",\n                \"uid\": \"250f053f-8797-45ba-98d0-f6402d7c5525\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"665\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-248.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:58Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n         
   \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-mnwnd.169ac286800c794a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"20ce80dc-f0dc-4f94-9806-43f2f0d3f7b1\",\n                \"resourceVersion\": \"204\",\n                \"creationTimestamp\": \"2021-08-13T04:14:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-mnwnd\",\n                \"uid\": \"250f053f-8797-45ba-98d0-f6402d7c5525\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"665\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-248.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:58Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-mnwnd.169ac286ec802000\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3772b3ba-5e9a-4e46-b42e-93ca0908a0a0\",\n                \"resourceVersion\": \"231\",\n                \"creationTimestamp\": \"2021-08-13T04:14:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-mnwnd\",\n                \"uid\": \"250f053f-8797-45ba-98d0-f6402d7c5525\",\n        
        \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"665\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Unhealthy\",\n            \"message\": \"Startup probe failed: Get \\\"http://127.0.0.1:9876/healthz\\\": dial tcp 127.0.0.1:9876: connect: connection refused\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-37-248.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:59Z\",\n            \"lastTimestamp\": \"2021-08-13T04:15:09Z\",\n            \"count\": 6,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-5c789c847b-7lmbv.169ac27301247a91\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"25d8e5c3-eea1-409e-ad16-1d3e4b40f375\",\n                \"resourceVersion\": \"94\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-5c789c847b-7lmbv\",\n                \"uid\": \"434b1717-2bfd-464d-ab68-6872f6b69471\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"453\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:03Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            
\"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-5c789c847b-7lmbv.169ac27b9dbe8aae\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f51c1a93-3a06-44cb-9a5e-3affaed72fdb\",\n                \"resourceVersion\": \"106\",\n                \"creationTimestamp\": \"2021-08-13T04:14:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-5c789c847b-7lmbv\",\n                \"uid\": \"434b1717-2bfd-464d-ab68-6872f6b69471\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"461\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-operator-5c789c847b-7lmbv to ip-172-20-39-193.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:11Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-5c789c847b-7lmbv.169ac27bd690d645\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"58b84286-64f3-44bc-bdec-3a0e43243959\",\n                \"resourceVersion\": \"111\",\n                \"creationTimestamp\": \"2021-08-13T04:14:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-5c789c847b-7lmbv\",\n             
   \"uid\": \"434b1717-2bfd-464d-ab68-6872f6b69471\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"539\",\n                \"fieldPath\": \"spec.containers{cilium-operator}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/cilium/operator:v1.10.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:12Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-5c789c847b-7lmbv.169ac27c6d9282be\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"349a285e-def8-4ded-ae46-254caf275f8d\",\n                \"resourceVersion\": \"117\",\n                \"creationTimestamp\": \"2021-08-13T04:14:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-5c789c847b-7lmbv\",\n                \"uid\": \"434b1717-2bfd-464d-ab68-6872f6b69471\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"539\",\n                \"fieldPath\": \"spec.containers{cilium-operator}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/cilium/operator:v1.10.3\\\" in 2.53345829s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:14Z\",\n       
     \"lastTimestamp\": \"2021-08-13T04:14:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-5c789c847b-7lmbv.169ac27c7b186f53\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"65211f17-bb74-4bcb-807d-43d003b44ec3\",\n                \"resourceVersion\": \"118\",\n                \"creationTimestamp\": \"2021-08-13T04:14:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-5c789c847b-7lmbv\",\n                \"uid\": \"434b1717-2bfd-464d-ab68-6872f6b69471\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"539\",\n                \"fieldPath\": \"spec.containers{cilium-operator}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container cilium-operator\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:15Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-5c789c847b-7lmbv.169ac27c81421bc0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7c190bc7-fe6e-4ce0-8ffe-5a14d005926d\",\n                \"resourceVersion\": \"119\",\n                \"creationTimestamp\": \"2021-08-13T04:14:15Z\"\n            },\n            
\"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-5c789c847b-7lmbv\",\n                \"uid\": \"434b1717-2bfd-464d-ab68-6872f6b69471\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"539\",\n                \"fieldPath\": \"spec.containers{cilium-operator}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container cilium-operator\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:15Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-5c789c847b.169ac2730030fbec\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5ab13ca9-74d9-4675-a3e1-9e191a984cdd\",\n                \"resourceVersion\": \"62\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator-5c789c847b\",\n                \"uid\": \"5e8851cb-0406-417a-a07b-15f093c4023f\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"437\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: cilium-operator-5c789c847b-7lmbv\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n  
          \"lastTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator.169ac272fafe0c2f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"17f23e65-f4d8-42a6-abfa-be9f42c00479\",\n                \"resourceVersion\": \"57\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-operator\",\n                \"uid\": \"8f829a1e-1750-43a8-850e-67e2d59276f4\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"257\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set cilium-operator-5c789c847b to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wqcjf.169ac28332044616\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"758db85f-e27e-4cb0-91e1-55a4b7a07304\",\n                \"resourceVersion\": \"153\",\n                \"creationTimestamp\": \"2021-08-13T04:14:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"cilium-wqcjf\",\n                \"uid\": \"30446225-efaf-41d4-b71f-63bf2c659614\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"649\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-wqcjf to ip-172-20-60-176.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:43Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wqcjf.169ac283ec251843\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"49d35cd5-d04e-486f-92d4-c58645aa64db\",\n                \"resourceVersion\": \"172\",\n                \"creationTimestamp\": \"2021-08-13T04:14:47Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wqcjf\",\n                \"uid\": \"30446225-efaf-41d4-b71f-63bf2c659614\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/cilium/cilium:v1.10.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-176.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:47Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:47Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n          
  \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wqcjf.169ac285882aafe4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4aa32d62-2289-4512-a6dc-b95961b28b89\",\n                \"resourceVersion\": \"186\",\n                \"creationTimestamp\": \"2021-08-13T04:14:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wqcjf\",\n                \"uid\": \"30446225-efaf-41d4-b71f-63bf2c659614\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/cilium/cilium:v1.10.3\\\" in 6.912565285s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-176.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:54Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wqcjf.169ac285e72c8315\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f3398ebf-7aff-4fbc-a093-78aaff595fef\",\n                \"resourceVersion\": \"192\",\n                \"creationTimestamp\": \"2021-08-13T04:14:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"cilium-wqcjf\",\n                \"uid\": \"30446225-efaf-41d4-b71f-63bf2c659614\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-176.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:55Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wqcjf.169ac285ef9e457c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f3028de7-671d-4fdb-9d84-7b00ad456164\",\n                \"resourceVersion\": \"193\",\n                \"creationTimestamp\": \"2021-08-13T04:14:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wqcjf\",\n                \"uid\": \"30446225-efaf-41d4-b71f-63bf2c659614\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-176.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:55Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:55Z\",\n   
         \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wqcjf.169ac286072ea092\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"50567fcd-0eeb-4c6b-829d-83980d7867d7\",\n                \"resourceVersion\": \"194\",\n                \"creationTimestamp\": \"2021-08-13T04:14:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wqcjf\",\n                \"uid\": \"30446225-efaf-41d4-b71f-63bf2c659614\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/cilium/cilium:v1.10.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-176.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:56Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wqcjf.169ac2860976a27a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1b98c3d1-8074-4f1a-aa1f-422cb7030c05\",\n                \"resourceVersion\": \"195\",\n                \"creationTimestamp\": \"2021-08-13T04:14:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                
\"namespace\": \"kube-system\",\n                \"name\": \"cilium-wqcjf\",\n                \"uid\": \"30446225-efaf-41d4-b71f-63bf2c659614\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-176.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:56Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wqcjf.169ac28610378940\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"47c93595-20d6-4da1-a901-b230407dddeb\",\n                \"resourceVersion\": \"197\",\n                \"creationTimestamp\": \"2021-08-13T04:14:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wqcjf\",\n                \"uid\": \"30446225-efaf-41d4-b71f-63bf2c659614\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container cilium-agent\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-176.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:56Z\",\n            \"lastTimestamp\": 
\"2021-08-13T04:14:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wqcjf.169ac2866c616173\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"43999936-cf04-468f-b04d-2666e8844fcb\",\n                \"resourceVersion\": \"229\",\n                \"creationTimestamp\": \"2021-08-13T04:14:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-wqcjf\",\n                \"uid\": \"30446225-efaf-41d4-b71f-63bf2c659614\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"652\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Unhealthy\",\n            \"message\": \"Startup probe failed: Get \\\"http://127.0.0.1:9876/healthz\\\": dial tcp 127.0.0.1:9876: connect: connection refused\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-60-176.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:57Z\",\n            \"lastTimestamp\": \"2021-08-13T04:15:07Z\",\n            \"count\": 6,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-zp7bd.169ac282895aceb7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bb5687ac-92ea-46c4-9743-be52d7cec3a0\",\n                \"resourceVersion\": \"143\",\n                \"creationTimestamp\": \"2021-08-13T04:14:41Z\"\n            },\n            
\"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-zp7bd\",\n                \"uid\": \"fe77793f-1552-4a32-a12c-cf4df9568845\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"633\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/cilium-zp7bd to ip-172-20-59-139.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:41Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:41Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-zp7bd.169ac28354f1194a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"71fed0c4-fa59-4d3a-bab6-76dda029d2d8\",\n                \"resourceVersion\": \"161\",\n                \"creationTimestamp\": \"2021-08-13T04:14:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-zp7bd\",\n                \"uid\": \"fe77793f-1552-4a32-a12c-cf4df9568845\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"635\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"quay.io/cilium/cilium:v1.10.3\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-139.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": 
\"2021-08-13T04:14:44Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-zp7bd.169ac2850c1a8647\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e78d330f-9c75-4edc-a0f6-2d44ac9a17db\",\n                \"resourceVersion\": \"180\",\n                \"creationTimestamp\": \"2021-08-13T04:14:51Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-zp7bd\",\n                \"uid\": \"fe77793f-1552-4a32-a12c-cf4df9568845\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"635\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"quay.io/cilium/cilium:v1.10.3\\\" in 7.367886975s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-139.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:51Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:51Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-zp7bd.169ac2856b92d408\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ac7293e7-e241-4906-adce-6a3b54e8569b\",\n                \"resourceVersion\": \"184\",\n                \"creationTimestamp\": \"2021-08-13T04:14:53Z\"\n         
   },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-zp7bd\",\n                \"uid\": \"fe77793f-1552-4a32-a12c-cf4df9568845\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"635\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-139.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:53Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-zp7bd.169ac285712eab32\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6203a0eb-37f4-42f8-82f3-6714b671ae42\",\n                \"resourceVersion\": \"185\",\n                \"creationTimestamp\": \"2021-08-13T04:14:53Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-zp7bd\",\n                \"uid\": \"fe77793f-1552-4a32-a12c-cf4df9568845\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"635\",\n                \"fieldPath\": \"spec.initContainers{clean-cilium-state}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container clean-cilium-state\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-59-139.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:53Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:53Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-zp7bd.169ac28591488d34\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"eeca9d26-5c9b-4255-9d77-42c307b95b29\",\n                \"resourceVersion\": \"187\",\n                \"creationTimestamp\": \"2021-08-13T04:14:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"cilium-zp7bd\",\n                \"uid\": \"fe77793f-1552-4a32-a12c-cf4df9568845\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"635\",\n                \"fieldPath\": \"spec.containers{cilium-agent}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"quay.io/cilium/cilium:v1.10.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-59-139.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:54Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-zp7bd.169ac28593aca8fe\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3aff58ec-574e-4b66-974f-f9ef13d585e4\",\n                
                "resourceVersion": "188",
                "creationTimestamp": "2021-08-13T04:14:54Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "cilium-zp7bd",
                "uid": "fe77793f-1552-4a32-a12c-cf4df9568845",
                "apiVersion": "v1",
                "resourceVersion": "635",
                "fieldPath": "spec.containers{cilium-agent}"
            },
            "reason": "Created",
            "message": "Created container cilium-agent",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-59-139.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:14:54Z",
            "lastTimestamp": "2021-08-13T04:14:54Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "cilium-zp7bd.169ac2859a3a81b9",
                "namespace": "kube-system",
                "uid": "723b8b27-f7da-4f2f-9185-0bfffbe5c996",
                "resourceVersion": "189",
                "creationTimestamp": "2021-08-13T04:14:54Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "cilium-zp7bd",
                "uid": "fe77793f-1552-4a32-a12c-cf4df9568845",
                "apiVersion": "v1",
                "resourceVersion": "635",
                "fieldPath": "spec.containers{cilium-agent}"
            },
            "reason": "Started",
            "message": "Started container cilium-agent",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-59-139.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:14:54Z",
            "lastTimestamp": "2021-08-13T04:14:54Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "cilium-zp7bd.169ac2860c786941",
                "namespace": "kube-system",
                "uid": "c469a8a3-0dd1-489c-a83d-076a736bd797",
                "resourceVersion": "228",
                "creationTimestamp": "2021-08-13T04:14:56Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "cilium-zp7bd",
                "uid": "fe77793f-1552-4a32-a12c-cf4df9568845",
                "apiVersion": "v1",
                "resourceVersion": "635",
                "fieldPath": "spec.containers{cilium-agent}"
            },
            "reason": "Unhealthy",
            "message": "Startup probe failed: Get \"http://127.0.0.1:9876/healthz\": dial tcp 127.0.0.1:9876: connect: connection refused",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-59-139.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:14:56Z",
            "lastTimestamp": "2021-08-13T04:15:06Z",
            "count": 6,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "cilium.169ac272f958aa7b",
                "namespace": "kube-system",
                "uid": "369e952f-e1e7-4e1b-984e-31080fdc9687",
                "resourceVersion": "55",
                "creationTimestamp": "2021-08-13T04:13:34Z"
            },
            "involvedObject": {
                "kind": "DaemonSet",
                "namespace": "kube-system",
                "name": "cilium",
                "uid": "3e4ea738-81d7-4f22-8d95-a3162c07bdad",
                "apiVersion": "apps/v1",
                "resourceVersion": "256"
            },
            "reason": "SuccessfulCreate",
            "message": "Created pod: cilium-crxcg",
            "source": {
                "component": "daemonset-controller"
            },
            "firstTimestamp": "2021-08-13T04:13:34Z",
            "lastTimestamp": "2021-08-13T04:13:34Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "cilium.169ac2822c53f7b2",
                "namespace": "kube-system",
                "uid": "0f9e9c02-dddf-49f1-81fb-87ac8abfd139",
                "resourceVersion": "132",
                "creationTimestamp": "2021-08-13T04:14:39Z"
            },
            "involvedObject": {
                "kind": "DaemonSet",
                "namespace": "kube-system",
                "name": "cilium",
                "uid": "3e4ea738-81d7-4f22-8d95-a3162c07bdad",
                "apiVersion": "apps/v1",
                "resourceVersion": "447"
            },
            "reason": "SuccessfulCreate",
            "message": "Created pod: cilium-d2ptz",
            "source": {
                "component": "daemonset-controller"
            },
            "firstTimestamp": "2021-08-13T04:14:39Z",
            "lastTimestamp": "2021-08-13T04:14:39Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "cilium.169ac2828898d832",
                "namespace": "kube-system",
                "uid": "6e847fec-9dd5-4515-83d9-c11886b7126a",
                "resourceVersion": "141",
                "creationTimestamp": "2021-08-13T04:14:41Z"
            },
            "involvedObject": {
                "kind": "DaemonSet",
                "namespace": "kube-system",
                "name": "cilium",
                "uid": "3e4ea738-81d7-4f22-8d95-a3162c07bdad",
                "apiVersion": "apps/v1",
                "resourceVersion": "621"
            },
            "reason": "SuccessfulCreate",
            "message": "Created pod: cilium-zp7bd",
            "source": {
                "component": "daemonset-controller"
            },
            "firstTimestamp": "2021-08-13T04:14:41Z",
            "lastTimestamp": "2021-08-13T04:14:41Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "cilium.169ac28330fef93f",
                "namespace": "kube-system",
                "uid": "0b3319d3-51d9-4b55-b052-88747e4c2e9f",
                "resourceVersion": "150",
                "creationTimestamp": "2021-08-13T04:14:43Z"
            },
            "involvedObject": {
                "kind": "DaemonSet",
                "namespace": "kube-system",
                "name": "cilium",
                "uid": "3e4ea738-81d7-4f22-8d95-a3162c07bdad",
                "apiVersion": "apps/v1",
                "resourceVersion": "638"
            },
            "reason": "SuccessfulCreate",
            "message": "Created pod: cilium-wqcjf",
            "source": {
                "component": "daemonset-controller"
            },
            "firstTimestamp": "2021-08-13T04:14:43Z",
            "lastTimestamp": "2021-08-13T04:14:43Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "cilium.169ac283646209f6",
                "namespace": "kube-system",
                "uid": "02c4a3cd-dc8f-4596-ba81-398cf969d40c",
                "resourceVersion": "167",
                "creationTimestamp": "2021-08-13T04:14:44Z"
            },
            "involvedObject": {
                "kind": "DaemonSet",
                "namespace": "kube-system",
                "name": "cilium",
                "uid": "3e4ea738-81d7-4f22-8d95-a3162c07bdad",
                "apiVersion": "apps/v1",
                "resourceVersion": "653"
            },
            "reason": "SuccessfulCreate",
            "message": "Created pod: cilium-mnwnd",
            "source": {
                "component": "daemonset-controller"
            },
            "firstTimestamp": "2021-08-13T04:14:44Z",
            "lastTimestamp": "2021-08-13T04:14:44Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-5867r.169ac2730367bd5b",
                "namespace": "kube-system",
                "uid": "d46bd831-4ee6-4c37-9f41-2b8b4568f8be",
                "resourceVersion": "124",
                "creationTimestamp": "2021-08-13T04:13:34Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-5867r",
                "uid": "006935a2-32cc-4ee0-bc43-7972a60ff2ae",
                "apiVersion": "v1",
                "resourceVersion": "455"
            },
            "reason": "FailedScheduling",
            "message": "0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-08-13T04:13:34Z",
            "lastTimestamp": "2021-08-13T04:14:21Z",
            "count": 6,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-5867r.169ac2822c26dedc",
                "namespace": "kube-system",
                "uid": "82cc92b3-a9f8-49b3-862a-dd79f3516278",
                "resourceVersion": "131",
                "creationTimestamp": "2021-08-13T04:14:39Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-5867r",
                "uid": "006935a2-32cc-4ee0-bc43-7972a60ff2ae",
                "apiVersion": "v1",
                "resourceVersion": "473"
            },
            "reason": "FailedScheduling",
            "message": "0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-08-13T04:14:39Z",
            "lastTimestamp": "2021-08-13T04:14:39Z",
            "count": 1,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-5867r.169ac284b877e068",
                "namespace": "kube-system",
                "uid": "29f69f59-d8bb-45e4-9a75-5530cf5c5e60",
                "resourceVersion": "177",
                "creationTimestamp": "2021-08-13T04:14:50Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-5867r",
                "uid": "006935a2-32cc-4ee0-bc43-7972a60ff2ae",
                "apiVersion": "v1",
                "resourceVersion": "620"
            },
            "reason": "FailedScheduling",
            "message": "0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-08-13T04:14:50Z",
            "lastTimestamp": "2021-08-13T04:14:50Z",
            "count": 1,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-5867r.169ac28744f36da5",
                "namespace": "kube-system",
                "uid": "37b3c0b4-c03b-4e9b-bca8-06b04ca1fb63",
                "resourceVersion": "214",
                "creationTimestamp": "2021-08-13T04:15:01Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-5867r",
                "uid": "006935a2-32cc-4ee0-bc43-7972a60ff2ae",
                "apiVersion": "v1",
                "resourceVersion": "692"
            },
            "reason": "Scheduled",
            "message": "Successfully assigned kube-system/coredns-5dc785954d-5867r to ip-172-20-59-139.ca-central-1.compute.internal",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-08-13T04:15:01Z",
            "lastTimestamp": "2021-08-13T04:15:01Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-5867r.169ac28af4d5754a",
                "namespace": "kube-system",
                "uid": "e2917626-642a-49aa-962a-66ef3eee0b96",
                "resourceVersion": "233",
                "creationTimestamp": "2021-08-13T04:15:17Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-5867r",
                "uid": "006935a2-32cc-4ee0-bc43-7972a60ff2ae",
                "apiVersion": "v1",
                "resourceVersion": "782",
                "fieldPath": "spec.containers{coredns}"
            },
            "reason": "Pulling",
            "message": "Pulling image \"k8s.gcr.io/coredns/coredns:v1.8.4\"",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-59-139.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:15:17Z",
            "lastTimestamp": "2021-08-13T04:15:17Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-5867r.169ac28c78dae70d",
                "namespace": "kube-system",
                "uid": "4082a16c-195a-42bc-b67a-02c8f50d2316",
                "resourceVersion": "241",
                "creationTimestamp": "2021-08-13T04:15:23Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-5867r",
                "uid": "006935a2-32cc-4ee0-bc43-7972a60ff2ae",
                "apiVersion": "v1",
                "resourceVersion": "782",
                "fieldPath": "spec.containers{coredns}"
            },
            "reason": "Pulled",
            "message": "Successfully pulled image \"k8s.gcr.io/coredns/coredns:v1.8.4\" in 6.509901353s",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-59-139.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:15:23Z",
            "lastTimestamp": "2021-08-13T04:15:23Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-5867r.169ac28c7fa3bc5a",
                "namespace": "kube-system",
                "uid": "712a01f1-ddfc-4adb-8799-b3c8c1c5cbe9",
                "resourceVersion": "242",
                "creationTimestamp": "2021-08-13T04:15:23Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-5867r",
                "uid": "006935a2-32cc-4ee0-bc43-7972a60ff2ae",
                "apiVersion": "v1",
                "resourceVersion": "782",
                "fieldPath": "spec.containers{coredns}"
            },
            "reason": "Created",
            "message": "Created container coredns",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-59-139.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:15:23Z",
            "lastTimestamp": "2021-08-13T04:15:23Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-5867r.169ac28c8557ace3",
                "namespace": "kube-system",
                "uid": "b92d47ff-34d9-4545-a3d4-07e2d131f640",
                "resourceVersion": "243",
                "creationTimestamp": "2021-08-13T04:15:24Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-5867r",
                "uid": "006935a2-32cc-4ee0-bc43-7972a60ff2ae",
                "apiVersion": "v1",
                "resourceVersion": "782",
                "fieldPath": "spec.containers{coredns}"
            },
            "reason": "Started",
            "message": "Started container coredns",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-59-139.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:15:24Z",
            "lastTimestamp": "2021-08-13T04:15:24Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-8447w.169ac28c0e3790d3",
                "namespace": "kube-system",
                "uid": "70ff807e-83c1-4d79-905f-21cd2993e33b",
                "resourceVersion": "239",
                "creationTimestamp": "2021-08-13T04:15:22Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-8447w",
                "uid": "9c30f028-4b4d-4ced-8d52-b3123c100808",
                "apiVersion": "v1",
                "resourceVersion": "882"
            },
            "reason": "Scheduled",
            "message": "Successfully assigned kube-system/coredns-5dc785954d-8447w to ip-172-20-37-248.ca-central-1.compute.internal",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-08-13T04:15:22Z",
            "lastTimestamp": "2021-08-13T04:15:22Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-8447w.169ac28c68779468",
                "namespace": "kube-system",
                "uid": "72bff73e-7590-48cd-9295-20a43044ce7b",
                "resourceVersion": "240",
                "creationTimestamp": "2021-08-13T04:15:23Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-8447w",
                "uid": "9c30f028-4b4d-4ced-8d52-b3123c100808",
                "apiVersion": "v1",
                "resourceVersion": "886",
                "fieldPath": "spec.containers{coredns}"
            },
            "reason": "Pulling",
            "message": "Pulling image \"k8s.gcr.io/coredns/coredns:v1.8.4\"",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-37-248.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:15:23Z",
            "lastTimestamp": "2021-08-13T04:15:23Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-8447w.169ac28dc9d5337f",
                "namespace": "kube-system",
                "uid": "fbae118f-60be-4e4d-b74a-30bc296561c1",
                "resourceVersion": "244",
                "creationTimestamp": "2021-08-13T04:15:29Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-8447w",
                "uid": "9c30f028-4b4d-4ced-8d52-b3123c100808",
                "apiVersion": "v1",
                "resourceVersion": "886",
                "fieldPath": "spec.containers{coredns}"
            },
            "reason": "Pulled",
            "message": "Successfully pulled image \"k8s.gcr.io/coredns/coredns:v1.8.4\" in 5.928481233s",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-37-248.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:15:29Z",
            "lastTimestamp": "2021-08-13T04:15:29Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-8447w.169ac28dd1f5020d",
                "namespace": "kube-system",
                "uid": "ca1e02cd-a5dc-45cc-b9f1-b49ac49476eb",
                "resourceVersion": "245",
                "creationTimestamp": "2021-08-13T04:15:29Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-8447w",
                "uid": "9c30f028-4b4d-4ced-8d52-b3123c100808",
                "apiVersion": "v1",
                "resourceVersion": "886",
                "fieldPath": "spec.containers{coredns}"
            },
            "reason": "Created",
            "message": "Created container coredns",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-37-248.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:15:29Z",
            "lastTimestamp": "2021-08-13T04:15:29Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d-8447w.169ac28dd86a10cd",
                "namespace": "kube-system",
                "uid": "2ae95e24-93fd-4f55-b63c-cddcc6d9d1ac",
                "resourceVersion": "246",
                "creationTimestamp": "2021-08-13T04:15:29Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d-8447w",
                "uid": "9c30f028-4b4d-4ced-8d52-b3123c100808",
                "apiVersion": "v1",
                "resourceVersion": "886",
                "fieldPath": "spec.containers{coredns}"
            },
            "reason": "Started",
            "message": "Started container coredns",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-37-248.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:15:29Z",
            "lastTimestamp": "2021-08-13T04:15:29Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d.169ac273008170d6",
                "namespace": "kube-system",
                "uid": "526828ad-653a-4484-981d-ca0ca76a0fee",
                "resourceVersion": "65",
                "creationTimestamp": "2021-08-13T04:13:34Z"
            },
            "involvedObject": {
                "kind": "ReplicaSet",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d",
                "uid": "1aa546ea-b8b1-4d18-81da-b398e6b7ff13",
                "apiVersion": "apps/v1",
                "resourceVersion": "440"
            },
            "reason": "SuccessfulCreate",
            "message": "Created pod: coredns-5dc785954d-5867r",
            "source": {
                "component": "replicaset-controller"
            },
            "firstTimestamp": "2021-08-13T04:13:34Z",
            "lastTimestamp": "2021-08-13T04:13:34Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-5dc785954d.169ac28c0cc12fb1",
                "namespace": "kube-system",
                "uid": "41b58a6b-656c-4b71-9fcb-cc6196429d72",
                "resourceVersion": "238",
                "creationTimestamp": "2021-08-13T04:15:22Z"
            },
            "involvedObject": {
                "kind": "ReplicaSet",
                "namespace": "kube-system",
                "name": "coredns-5dc785954d",
                "uid": "1aa546ea-b8b1-4d18-81da-b398e6b7ff13",
                "apiVersion": "apps/v1",
                "resourceVersion": "880"
            },
            "reason": "SuccessfulCreate",
            "message": "Created pod: coredns-5dc785954d-8447w",
            "source": {
                "component": "replicaset-controller"
            },
            "firstTimestamp": "2021-08-13T04:15:22Z",
            "lastTimestamp": "2021-08-13T04:15:22Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-84d4cfd89c-njhtm.169ac272fff3fc14",
                "namespace": "kube-system",
                "uid": "1c98756f-4550-47f9-9406-7454b4cdc127",
                "resourceVersion": "123",
                "creationTimestamp": "2021-08-13T04:13:34Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-84d4cfd89c-njhtm",
                "uid": "4bdd73eb-427e-4af6-8b39-2183f881cc52",
                "apiVersion": "v1",
                "resourceVersion": "452"
            },
            "reason": "FailedScheduling",
            "message": "0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-08-13T04:13:34Z",
            "lastTimestamp": "2021-08-13T04:14:21Z",
            "count": 6,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-84d4cfd89c-njhtm.169ac2822ada61ed",
                "namespace": "kube-system",
                "uid": "d6972118-d7c1-461c-be4f-4c4fb993f663",
                "resourceVersion": "129",
                "creationTimestamp": "2021-08-13T04:14:39Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-84d4cfd89c-njhtm",
                "uid": "4bdd73eb-427e-4af6-8b39-2183f881cc52",
                "apiVersion": "v1",
                "resourceVersion": "458"
            },
            "reason": "FailedScheduling",
            "message": "0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-08-13T04:14:39Z",
            "lastTimestamp": "2021-08-13T04:14:39Z",
            "count": 1,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-84d4cfd89c-njhtm.169ac284b3968a77",
                "namespace": "kube-system",
                "uid": "e2dc203c-4a9a-42e1-a313-d9185b1371fa",
                "resourceVersion": "176",
                "creationTimestamp": "2021-08-13T04:14:50Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-84d4cfd89c-njhtm",
                "uid": "4bdd73eb-427e-4af6-8b39-2183f881cc52",
                "apiVersion": "v1",
                "resourceVersion": "617"
            },
            "reason": "FailedScheduling",
            "message": "0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-08-13T04:14:50Z",
            "lastTimestamp": "2021-08-13T04:14:50Z",
            "count": 1,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-84d4cfd89c-njhtm.169ac287089f2edb",
                "namespace": "kube-system",
                "uid": "7ae7e0bc-705f-4de5-9869-358e00a11329",
                "resourceVersion": "211",
                "creationTimestamp": "2021-08-13T04:15:00Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-84d4cfd89c-njhtm",
                "uid": "4bdd73eb-427e-4af6-8b39-2183f881cc52",
                "apiVersion": "v1",
                "resourceVersion": "691"
            },
            "reason": "Scheduled",
            "message": "Successfully assigned kube-system/coredns-autoscaler-84d4cfd89c-njhtm to ip-172-20-46-56.ca-central-1.compute.internal",
            "source": {
                "component": "default-scheduler"
            },
            "firstTimestamp": "2021-08-13T04:15:00Z",
            "lastTimestamp": "2021-08-13T04:15:00Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-84d4cfd89c-njhtm.169ac287e7be2f2e",
                "namespace": "kube-system",
                "uid": "d56ad72c-c10e-44c1-b7b5-d582a5f20e49",
                "resourceVersion": "222",
                "creationTimestamp": "2021-08-13T04:15:04Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-84d4cfd89c-njhtm"
            },
            "reason": "TaintManagerEviction",
            "message": "Cancelling deletion of Pod kube-system/coredns-autoscaler-84d4cfd89c-njhtm",
            "source": {
                "component": "taint-controller"
            },
            "firstTimestamp": "2021-08-13T04:15:04Z",
            "lastTimestamp": "2021-08-13T04:15:04Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-84d4cfd89c-njhtm.169ac28a9d53e2e2",
                "namespace": "kube-system",
                "uid": "e1890611-ff44-4cbf-8dba-ad1c07149aa9",
                "resourceVersion": "232",
                "creationTimestamp": "2021-08-13T04:15:15Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-84d4cfd89c-njhtm",
                "uid": "4bdd73eb-427e-4af6-8b39-2183f881cc52",
                "apiVersion": "v1",
                "resourceVersion": "770",
                "fieldPath": "spec.containers{autoscaler}"
            },
            "reason": "Pulling",
            "message": "Pulling image \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\"",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-46-56.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:15:15Z",
            "lastTimestamp": "2021-08-13T04:15:15Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-84d4cfd89c-njhtm.169ac28bf0da91af",
                "namespace": "kube-system",
                "uid": "5b9236a4-3778-4243-959c-3630eb9d7830",
                "resourceVersion": "234",
                "creationTimestamp": "2021-08-13T04:15:21Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-84d4cfd89c-njhtm",
                "uid": "4bdd73eb-427e-4af6-8b39-2183f881cc52",
                "apiVersion": "v1",
                "resourceVersion": "770",
                "fieldPath": "spec.containers{autoscaler}"
            },
            "reason": "Pulled",
            "message": "Successfully pulled image \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\" in 5.696288544s",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-46-56.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:15:21Z",
            "lastTimestamp": "2021-08-13T04:15:21Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-84d4cfd89c-njhtm.169ac28bf73b1356",
                "namespace": "kube-system",
                "uid": "4f95d4f0-e193-4c93-ab6f-7157444a4dcc",
                "resourceVersion": "235",
                "creationTimestamp": "2021-08-13T04:15:21Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-84d4cfd89c-njhtm",
                "uid": "4bdd73eb-427e-4af6-8b39-2183f881cc52",
                "apiVersion": "v1",
                "resourceVersion": "770",
                "fieldPath": "spec.containers{autoscaler}"
            },
            "reason": "Created",
            "message": "Created container autoscaler",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-46-56.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:15:21Z",
            "lastTimestamp": "2021-08-13T04:15:21Z",
            "count": 1,
            "type": "Normal",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        },
        {
            "metadata": {
                "name": "coredns-autoscaler-84d4cfd89c-njhtm.169ac28bfd13b62e",
                "namespace": "kube-system",
                "uid": "1c5503d1-1a6c-4136-9e6a-a5c92902034a",
                "resourceVersion": "236",
                "creationTimestamp": "2021-08-13T04:15:21Z"
            },
            "involvedObject": {
                "kind": "Pod",
                "namespace": "kube-system",
                "name": "coredns-autoscaler-84d4cfd89c-njhtm",
                "uid": "4bdd73eb-427e-4af6-8b39-2183f881cc52",
                "apiVersion": "v1",
                "resourceVersion": "770",
                "fieldPath": "spec.containers{autoscaler}"
            },
            "reason": "Started",
            "message": "Started container autoscaler",
            "source": {
                "component": "kubelet",
                "host": "ip-172-20-46-56.ca-central-1.compute.internal"
            },
            "firstTimestamp": "2021-08-13T04:15:21Z",
            "lastTimestamp": "2021-08-13T04:15:21Z",
            "count": 1,
\"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c.169ac27300357bbc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0759eb63-4cad-4e18-8d7e-242da68584b9\",\n                \"resourceVersion\": \"64\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                \"uid\": \"d8cca4bb-1fc5-438a-a45d-8d6da9c5719a\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"438\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-autoscaler-84d4cfd89c-njhtm\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler.169ac272fdc85248\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8150cf01-9aad-404a-8c94-db3e020f25e5\",\n                \"resourceVersion\": \"60\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-autoscaler\",\n                \"uid\": 
\"1ea8163e-9265-493e-8130-a50174024a0b\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"229\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-autoscaler-84d4cfd89c to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.169ac272fc9cb320\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1eb988b5-edbc-4be0-8946-afb16509393a\",\n                \"resourceVersion\": \"59\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"53644505-28a0-48bc-aed7-9fd9fefe1607\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"222\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-5dc785954d to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"coredns.169ac28c0c427c8c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"622f27e0-dbe2-4a28-829d-2c52c26e5e0d\",\n                \"resourceVersion\": \"237\",\n                \"creationTimestamp\": \"2021-08-13T04:15:22Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"53644505-28a0-48bc-aed7-9fd9fefe1607\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"879\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-5dc785954d to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:15:22Z\",\n            \"lastTimestamp\": \"2021-08-13T04:15:22Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-9df689cc8-vmhq8.169ac2730215c43d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e3f858d8-c5ec-4853-bc2d-5f84e1877da3\",\n                \"resourceVersion\": \"95\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-9df689cc8-vmhq8\",\n                \"uid\": \"cc0cfbf0-1789-4437-910b-ce25028ec88e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"454\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) didn't match Pod's 
node affinity/selector.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:03Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-9df689cc8-vmhq8.169ac27b9dba2933\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b56b9ea2-3e7c-4550-9445-9a7e4d19bc3c\",\n                \"resourceVersion\": \"105\",\n                \"creationTimestamp\": \"2021-08-13T04:14:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-9df689cc8-vmhq8\",\n                \"uid\": \"cc0cfbf0-1789-4437-910b-ce25028ec88e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"468\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/dns-controller-9df689cc8-vmhq8 to ip-172-20-39-193.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:11Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-9df689cc8-vmhq8.169ac27bd985adad\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d0b293c8-ac02-4ac0-bdc9-afcd8d18a3c5\",\n                
\"resourceVersion\": \"112\",\n                \"creationTimestamp\": \"2021-08-13T04:14:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-9df689cc8-vmhq8\",\n                \"uid\": \"cc0cfbf0-1789-4437-910b-ce25028ec88e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"540\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:12Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-9df689cc8-vmhq8.169ac27bdc00555a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f0dd5361-0708-474f-b629-c9a02f614dec\",\n                \"resourceVersion\": \"113\",\n                \"creationTimestamp\": \"2021-08-13T04:14:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-9df689cc8-vmhq8\",\n                \"uid\": \"cc0cfbf0-1789-4437-910b-ce25028ec88e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"540\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": 
\"Created\",\n            \"message\": \"Created container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:12Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-9df689cc8-vmhq8.169ac27be0948f13\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f6240e81-37e7-4454-b574-8239338e19e6\",\n                \"resourceVersion\": \"114\",\n                \"creationTimestamp\": \"2021-08-13T04:14:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-9df689cc8-vmhq8\",\n                \"uid\": \"cc0cfbf0-1789-4437-910b-ce25028ec88e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"540\",\n                \"fieldPath\": \"spec.containers{dns-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container dns-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:12Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"dns-controller-9df689cc8.169ac273009168ea\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3def39a9-3cfc-46ad-bc15-dbd9f9792d26\",\n                \"resourceVersion\": \"68\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller-9df689cc8\",\n                \"uid\": \"4a040ba8-c570-432a-b345-6e7dc9efc54e\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"439\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: dns-controller-9df689cc8-vmhq8\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller.169ac272fc47a68f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"0e6aacf0-3741-4ff8-8dd0-4d679f3c6c91\",\n                \"resourceVersion\": \"58\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"dns-controller\",\n                \"uid\": \"a215ed33-127e-4307-932d-1e46c3624f67\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"238\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set 
dns-controller-9df689cc8 to 1\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-39-193.ca-central-1.compute.internal.169ac263c21a5f83\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4f6962cc-a1e9-4ebe-9aa4-1f370efd03cd\",\n                \"resourceVersion\": \"18\",\n                \"creationTimestamp\": \"2021-08-13T04:13:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"0f776a789478090eb40d3282101c14b4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:28Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"etcd-manager-events-ip-172-20-39-193.ca-central-1.compute.internal.169ac265e758f06a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"dcb0aa47-7f31-42d1-a77b-b4c801701c66\",\n                \"resourceVersion\": \"32\",\n                \"creationTimestamp\": \"2021-08-13T04:13:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"0f776a789478090eb40d3282101c14b4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\" in 9.214778479s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:38Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-39-193.ca-central-1.compute.internal.169ac2660b50adcb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b949e0c3-12be-431c-90e1-c45407f51d6a\",\n                \"resourceVersion\": \"34\",\n                \"creationTimestamp\": \"2021-08-13T04:13:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-39-193.ca-central-1.compute.internal\",\n   
             \"uid\": \"0f776a789478090eb40d3282101c14b4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:38Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-events-ip-172-20-39-193.ca-central-1.compute.internal.169ac26612d44b74\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"27eece2b-369b-444e-8102-695ef2f18b7a\",\n                \"resourceVersion\": \"37\",\n                \"creationTimestamp\": \"2021-08-13T04:13:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-events-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"0f776a789478090eb40d3282101c14b4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:38Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:38Z\",\n            \"count\": 1,\n            \"type\": 
\"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-39-193.ca-central-1.compute.internal.169ac263c3507508\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f6ee739d-92ec-448e-9f60-ae65076e6e22\",\n                \"resourceVersion\": \"19\",\n                \"creationTimestamp\": \"2021-08-13T04:13:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"5e3c3854c112290943f4dd173f655bd4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:28Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:28Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-39-193.ca-central-1.compute.internal.169ac26609115d25\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4ae763ea-49fd-4908-b2db-37a7f271b1cc\",\n                \"resourceVersion\": \"33\",\n                \"creationTimestamp\": \"2021-08-13T04:13:11Z\"\n            },\n            \"involvedObject\": {\n           
     \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"5e3c3854c112290943f4dd173f655bd4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707\\\" in 9.760193343s\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:38Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-39-193.ca-central-1.compute.internal.169ac2660bd0ca09\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"66ada5a9-45f6-4926-b440-ab7adebf9f92\",\n                \"resourceVersion\": \"35\",\n                \"creationTimestamp\": \"2021-08-13T04:13:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"5e3c3854c112290943f4dd173f655bd4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:38Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-manager-main-ip-172-20-39-193.ca-central-1.compute.internal.169ac26612919211\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"de18550d-a7e6-4c9c-ae73-1adfdf83f311\",\n                \"resourceVersion\": \"36\",\n                \"creationTimestamp\": \"2021-08-13T04:13:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"etcd-manager-main-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"5e3c3854c112290943f4dd173f655bd4\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{etcd-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container etcd-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:38Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:38Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-4l9ck.169ac27b7e496835\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"18bdf453-2468-45fd-bc39-bd0e7c271b48\",\n                
\"resourceVersion\": \"102\",\n                \"creationTimestamp\": \"2021-08-13T04:14:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-4l9ck\",\n                \"uid\": \"3fa693ac-43e2-4ed7-8c3b-dd89d7b7a286\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"534\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kops-controller-4l9ck to ip-172-20-39-193.ca-central-1.compute.internal\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:10Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-4l9ck.169ac27bafb72842\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7009ba55-a547-4e4d-9bb4-683d4d2b04c9\",\n                \"resourceVersion\": \"107\",\n                \"creationTimestamp\": \"2021-08-13T04:14:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-4l9ck\",\n                \"uid\": \"3fa693ac-43e2-4ed7-8c3b-dd89d7b7a286\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"535\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2\\\" already present on machine\",\n            
\"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:11Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-4l9ck.169ac27bb38ef6a0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a498ef22-0e33-4bd7-83a7-433931b942d4\",\n                \"resourceVersion\": \"108\",\n                \"creationTimestamp\": \"2021-08-13T04:14:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-4l9ck\",\n                \"uid\": \"3fa693ac-43e2-4ed7-8c3b-dd89d7b7a286\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"535\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:11Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-4l9ck.169ac27bbb748540\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"5e33cb5a-c8c9-4988-8140-34bb3b5a7e4c\",\n                \"resourceVersion\": \"109\",\n                \"creationTimestamp\": \"2021-08-13T04:14:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-4l9ck\",\n                \"uid\": \"3fa693ac-43e2-4ed7-8c3b-dd89d7b7a286\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"535\",\n                \"fieldPath\": \"spec.containers{kops-controller}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kops-controller\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:11Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller-leader.169ac27bf10b8c6e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"87820383-13aa-4fce-898d-ef38b4644f16\",\n                \"resourceVersion\": \"115\",\n                \"creationTimestamp\": \"2021-08-13T04:14:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ConfigMap\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller-leader\",\n                \"uid\": \"11f46e06-e248-426c-b823-521b48b041fb\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"550\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-39-193_0b822843-7615-4dc9-9bb6-bb2bfd41756a 
became leader\",\n            \"source\": {\n                \"component\": \"ip-172-20-39-193_0b822843-7615-4dc9-9bb6-bb2bfd41756a\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:12Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kops-controller.169ac27b7dc952ab\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e999129d-702b-4408-a593-a0c61d194116\",\n                \"resourceVersion\": \"101\",\n                \"creationTimestamp\": \"2021-08-13T04:14:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kops-controller\",\n                \"uid\": \"56e50c15-fe44-4abe-b913-1baecc1bc63c\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"434\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kops-controller-4l9ck\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:14:10Z\",\n            \"lastTimestamp\": \"2021-08-13T04:14:10Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-39-193.ca-central-1.compute.internal.169ac263d6e5a5b1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4f390430-bf85-435a-84d1-3797058f6de6\",\n                \"resourceVersion\": \"41\",\n    
            \"creationTimestamp\": \"2021-08-13T04:13:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"d7aa98e788b92a396c57ab1d0ac1ce33\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-apiserver-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:50Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-39-193.ca-central-1.compute.internal.169ac263dbf5ac7a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1651032f-a4cf-4c97-9d50-d8c270584105\",\n                \"resourceVersion\": \"42\",\n                \"creationTimestamp\": \"2021-08-13T04:13:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"d7aa98e788b92a396c57ab1d0ac1ce33\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container 
kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:50Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-39-193.ca-central-1.compute.internal.169ac263e3b0b1ca\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"67527908-8f85-4ef4-aef8-54f4a4794505\",\n                \"resourceVersion\": \"43\",\n                \"creationTimestamp\": \"2021-08-13T04:13:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"d7aa98e788b92a396c57ab1d0ac1ce33\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-apiserver}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-apiserver\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:50Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kube-apiserver-ip-172-20-39-193.ca-central-1.compute.internal.169ac263e3c157de\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"11714a7d-d600-4bfb-bcd5-209d89d7ff04\",\n                \"resourceVersion\": \"23\",\n                \"creationTimestamp\": \"2021-08-13T04:13:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"d7aa98e788b92a396c57ab1d0ac1ce33\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.2\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-39-193.ca-central-1.compute.internal.169ac263e6de31e4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1e9220c8-615c-4e56-bd1e-13a684d1d868\",\n                \"resourceVersion\": \"24\",\n                \"creationTimestamp\": \"2021-08-13T04:13:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-39-193.ca-central-1.compute.internal\",\n          
      \"uid\": \"d7aa98e788b92a396c57ab1d0ac1ce33\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-ip-172-20-39-193.ca-central-1.compute.internal.169ac263f4c59334\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"48905cdd-55b2-454b-af2a-0e84e20f600d\",\n                \"resourceVersion\": \"29\",\n                \"creationTimestamp\": \"2021-08-13T04:13:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-apiserver-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"d7aa98e788b92a396c57ab1d0ac1ce33\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{healthcheck}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container healthcheck\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n           
 \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-39-193.ca-central-1.compute.internal.169ac263e91f4bc6\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"52316b00-2e7b-40f6-9432-8185b66749df\",\n                \"resourceVersion\": \"47\",\n                \"creationTimestamp\": \"2021-08-13T04:13:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"1fe136c6f8abfe06298aa8e437d1cbe8\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-controller-manager-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:19Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-39-193.ca-central-1.compute.internal.169ac263ee60a6a7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9b24fccc-41f5-4812-a4a7-2a0901e3bd1d\",\n                \"resourceVersion\": \"48\",\n                \"creationTimestamp\": \"2021-08-13T04:13:10Z\"\n            },\n         
   \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"1fe136c6f8abfe06298aa8e437d1cbe8\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-controller-manager\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:19Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-39-193.ca-central-1.compute.internal.169ac263fb48ead5\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d913ec02-6743-45f3-aea0-dda2423ea681\",\n                \"resourceVersion\": \"49\",\n                \"creationTimestamp\": \"2021-08-13T04:13:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"1fe136c6f8abfe06298aa8e437d1cbe8\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-controller-manager\",\n            \"source\": {\n                \"component\": 
\"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:19Z\",\n            \"count\": 3,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-39-193.ca-central-1.compute.internal.169ac26b39b5ad67\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bf85fde7-fd09-4beb-9bdf-8da4f9f60e66\",\n                \"resourceVersion\": \"44\",\n                \"creationTimestamp\": \"2021-08-13T04:13:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"1fe136c6f8abfe06298aa8e437d1cbe8\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"Unhealthy\",\n            \"message\": \"Liveness probe failed: Get \\\"https://127.0.0.1:10257/healthz\\\": read tcp 127.0.0.1:33530-\\u003e127.0.0.1:10257: read: connection reset by peer\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:01Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:01Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                
\"name\": \"kube-controller-manager-ip-172-20-39-193.ca-central-1.compute.internal.169ac26b4e610a5d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"95d95031-de01-431d-8325-54debf61b452\",\n                \"resourceVersion\": \"46\",\n                \"creationTimestamp\": \"2021-08-13T04:13:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"1fe136c6f8abfe06298aa8e437d1cbe8\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-controller-manager}\"\n            },\n            \"reason\": \"BackOff\",\n            \"message\": \"Back-off restarting failed container\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:01Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:07Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-ip-172-20-39-193.ca-central-1.compute.internal.169ac272f225cf29\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2d7cf81b-7391-449b-a5fd-de1127109906\",\n                \"resourceVersion\": \"52\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager-ip-172-20-39-193.ca-central-1.compute.internal\",\n                
\"uid\": \"522907f0-ab95-44f1-8806-bb2a48505249\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"276\"\n            },\n            \"reason\": \"NodeNotReady\",\n            \"message\": \"Node is not ready\",\n            \"source\": {\n                \"component\": \"node-controller\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.169ac26fa9467472\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"92414c90-aea8-4478-a211-cf1a97743fab\",\n                \"resourceVersion\": \"50\",\n                \"creationTimestamp\": \"2021-08-13T04:13:20Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"2f90bca8-70cd-434b-b2c3-83cd39ca2aa3\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"277\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-39-193_68184d14-5961-4a5c-85af-4eb5b22dd95a became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:20Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:20Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kube-dns.169ac272f5954ae7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"afc9f942-7f60-41cf-8867-0702a8f8a940\",\n                \"resourceVersion\": \"54\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"PodDisruptionBudget\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-dns\",\n                \"uid\": \"a2fa2d16-ffd6-469e-8d28-711030a203db\",\n                \"apiVersion\": \"policy/v1\",\n                \"resourceVersion\": \"225\"\n            },\n            \"reason\": \"NoPods\",\n            \"message\": \"No matching pods found\",\n            \"source\": {\n                \"component\": \"controllermanager\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:34Z\",\n            \"count\": 2,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-39-193.ca-central-1.compute.internal.169ac263e7349474\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1dbbffc6-bef6-4911-9fbc-67f1ca52b967\",\n                \"resourceVersion\": \"25\",\n                \"creationTimestamp\": \"2021-08-13T04:13:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"f7b72ab0640b069f20f53b696192c6fd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container 
image \\\"k8s.gcr.io/kube-scheduler-amd64:v1.21.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-ip-172-20-39-193.ca-central-1.compute.internal.169ac263ededfc0c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3203a0eb-dd9f-4d0c-ae5d-b408aabd7836\",\n                \"resourceVersion\": \"27\",\n                \"creationTimestamp\": \"2021-08-13T04:13:10Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"f7b72ab0640b069f20f53b696192c6fd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:29Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                
\"name\": \"kube-scheduler-ip-172-20-39-193.ca-central-1.compute.internal.169ac264039add8f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fd518c76-24d3-4d8d-a924-42a5071ebe2a\",\n                \"resourceVersion\": \"31\",\n                \"creationTimestamp\": \"2021-08-13T04:13:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler-ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"uid\": \"f7b72ab0640b069f20f53b696192c6fd\",\n                \"apiVersion\": \"v1\",\n                \"fieldPath\": \"spec.containers{kube-scheduler}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-scheduler\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"ip-172-20-39-193.ca-central-1.compute.internal\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:12:30Z\",\n            \"lastTimestamp\": \"2021-08-13T04:12:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.169ac26c78010f9b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f9ca0fb0-a9f7-4987-893e-80deaafbc58d\",\n                \"resourceVersion\": \"7\",\n                \"creationTimestamp\": \"2021-08-13T04:13:06Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"976891b7-58f8-4546-87c0-cc79de906c02\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                
\"resourceVersion\": \"214\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"ip-172-20-39-193_988f513e-dac6-46ae-a47d-f1ee7314a90b became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-08-13T04:13:06Z\",\n            \"lastTimestamp\": \"2021-08-13T04:13:06Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12350\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12351\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3a27a660-a40a-45eb-ab8d-98a18fed419d\",\n                \"resourceVersion\": \"224\",\n                \"creationTimestamp\": \"2021-08-13T04:13:07Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"metadata\\\":{\\\"annotations\\\":{\\\"prometheus.io/port\\\":\\\"9153\\\",\\\"prometheus.io/scrape\\\":\\\"true\\\"},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"kube-dns\\\",\\\"namespace\\\":\\\"kube-system\\\",\\\"resourceVersion\\\":\\\"0\\\"},\\\"spec\\\":{\\\"clusterIP\\\":\\\"100.64.0.10\\\",\\\"ports\\\":[{\\\"name\\\":\\\"dns\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"port\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"port\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}],\\\"selector\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}}}\\n\",\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                \"selector\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                
\"clusterIP\": \"100.64.0.10\",\n                \"clusterIPs\": [\n                    \"100.64.0.10\"\n                ],\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\",\n                \"ipFamilies\": [\n                    \"IPv4\"\n                ],\n                \"ipFamilyPolicy\": \"SingleStack\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12351\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"cilium\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3e4ea738-81d7-4f22-8d95-a3162c07bdad\",\n                \"resourceVersion\": \"953\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-08-13T04:13:10Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"networking.cilium.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-app\": \"cilium\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"role.kubernetes.io/networking\": \"1\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"networking.cilium.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-app\\\":\\\"cilium\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"role.kubernetes.io/networking\\\":\\\"1\\\"},\\\"name\\\":\\\"cilium\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"cilium\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-app\\\":\\\"cilium\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"nodeAffinity\\\":{\\\"requiredDuringSchedulingIgnoredDuringExecution\\\":{\\\"nodeSelectorTerms\\\":[{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"kubernetes.io/os\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"linux\\\"]}]}]}},\\\"podAntiAffinity\\\":{\\\"requiredDuringSchedulingIgnoredDuringExecution\\\":[{\\\"labelSelector\\\":{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"k8s-app\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"cilium\\\"]}]},\\\"topologyKey\\\":\\\"kubernetes.io/hostname\\\"}]}},\\\"containers\\\":[{\\\"args\\\":[\\\"--config-dir=/tmp/cilium/config-map\\\"],\\\"command\\\":[\\\"cilium-agent\\\"],\\\"env\\\":[{\\\"name\\\":\\\"K8S_NODE_NAME\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"apiVersion\\\":\\\"v1\\\",\\\"fieldPath\\\":\\\"spec.nodeName\\\"}}},{\\\"name\\\":\\\"CILIUM_K8S_NAMESPACE\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"apiVersion\\\":\\\"v1\\\",\\\"fieldPath\\\":\\\"metadata.namespace\\\"}}},{\\\"name\\\":\\\"CILIUM_CLUSTERMESH_CONFIG\\\",\\\"value\\\":\\\"/var/lib/cilium/clustermesh/\\\"},{\\\"name\\\":\\\"CILIUM_CNI_CHAINING_MODE\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\
\\":\\\"cni-chaining-mode\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}},{\\\"name\\\":\\\"CILIUM_CUSTOM_CNI_CONF\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"custom-cni-conf\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}},{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"api.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\\\"},{\\\"name\\\":\\\"KUBERNETES_SERVICE_PORT\\\",\\\"value\\\":\\\"443\\\"}],\\\"image\\\":\\\"quay.io/cilium/cilium:v1.10.3\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"lifecycle\\\":{\\\"postStart\\\":{\\\"exec\\\":{\\\"command\\\":[\\\"/cni-install.sh\\\",\\\"--cni-exclusive=true\\\"]}},\\\"preStop\\\":{\\\"exec\\\":{\\\"command\\\":[\\\"/cni-uninstall.sh\\\"]}}},\\\"livenessProbe\\\":{\\\"failureThreshold\\\":10,\\\"httpGet\\\":{\\\"host\\\":\\\"127.0.0.1\\\",\\\"httpHeaders\\\":[{\\\"name\\\":\\\"brief\\\",\\\"value\\\":\\\"true\\\"}],\\\"path\\\":\\\"/healthz\\\",\\\"port\\\":9876,\\\"scheme\\\":\\\"HTTP\\\"},\\\"periodSeconds\\\":30,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"name\\\":\\\"cilium-agent\\\",\\\"readinessProbe\\\":{\\\"failureThreshold\\\":3,\\\"httpGet\\\":{\\\"host\\\":\\\"127.0.0.1\\\",\\\"httpHeaders\\\":[{\\\"name\\\":\\\"brief\\\",\\\"value\\\":\\\"true\\\"}],\\\"path\\\":\\\"/healthz\\\",\\\"port\\\":9876,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":5,\\\"periodSeconds\\\":30,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"25m\\\",\\\"memory\\\":\\\"128Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_ADMIN\\\",\\\"SYS_MODULE\\\"]},\\\"privileged\\\":true},\\\"startupProbe\\\":{\\\"failureThreshold\\\":105,\\\"httpGet\\\":{\\\"host\\\":\\\"127.0.0.1\\\",\\\"httpHeaders\\\":[{\\\"name\\\":\\\"brief\\\",\\\"value\\\":\\\"true\\\"}],\\\"path\\\":\\\"/healthz\\\",\\\"port\\\":9876,\\\"scheme\\\":\\\"HTTP\\\"},\\\"periodSeconds\\\":2,\\\"s
uccessThreshold\\\":null},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/sys/fs/bpf\\\",\\\"name\\\":\\\"bpf-maps\\\"},{\\\"mountPath\\\":\\\"/var/run/cilium\\\",\\\"name\\\":\\\"cilium-run\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cni-path\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"etc-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cilium/clustermesh\\\",\\\"name\\\":\\\"clustermesh-secrets\\\",\\\"readOnly\\\":true},{\\\"mountPath\\\":\\\"/tmp/cilium/config-map\\\",\\\"name\\\":\\\"cilium-config-path\\\",\\\"readOnly\\\":true},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\"}]}],\\\"hostNetwork\\\":true,\\\"initContainers\\\":[{\\\"command\\\":[\\\"/init-container.sh\\\"],\\\"env\\\":[{\\\"name\\\":\\\"CILIUM_ALL_STATE\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"clean-cilium-state\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}},{\\\"name\\\":\\\"CILIUM_BPF_STATE\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"clean-cilium-bpf-state\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}},{\\\"name\\\":\\\"CILIUM_WAIT_BPF_MOUNT\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"wait-bpf-mount\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}}],\\\"image\\\":\\\"quay.io/cilium/cilium:v1.10.3\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"clean-cilium-state\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"100Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"100Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_ADMIN\\\"]},\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/sys/fs/bpf\\\",\\\"mountPropagation\\\":\\\"HostToContainer\\\",\\\"name\\\":\\\"bpf-maps\\\"},{\\\"mountPath\\\":\\\"/sys/fs/cgroup/unified\\\",\\\"m
ountPropagation\\\":\\\"HostToContainer\\\",\\\"name\\\":\\\"cilium-cgroup\\\"},{\\\"mountPath\\\":\\\"/var/run/cilium\\\",\\\"name\\\":\\\"cilium-run\\\"}]}],\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"restartPolicy\\\":\\\"Always\\\",\\\"serviceAccount\\\":\\\"cilium\\\",\\\"serviceAccountName\\\":\\\"cilium\\\",\\\"terminationGracePeriodSeconds\\\":1,\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/var/run/cilium\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cilium-run\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/sys/fs/bpf\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"bpf-maps\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/opt/cni/bin\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-path\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/sys/fs/cgroup/unified\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cilium-cgroup\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"etc-cni-netd\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"name\\\":\\\"clustermesh-secrets\\\",\\\"secret\\\":{\\\"defaultMode\\\":420,\\\"optional\\\":true,\\\"secretName\\\":\\\"cilium-clustermesh\\\"}},{\\\"configMap\\\":{\\\"name\\\":\\\"cilium-config\\\"},\\\"name\\\":\\\"cilium-config-path\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"OnDelete\\\"}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"cilium\",\n                        \"kubernetes.io/cluster-service\": \"true\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n               
         \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"cilium\",\n                            \"kubernetes.io/cluster-service\": \"true\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"cilium-run\",\n                                \"hostPath\": {\n                                    \"path\": \"/var/run/cilium\",\n                                    \"type\": \"DirectoryOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"hostPath\": {\n                                    \"path\": \"/sys/fs/bpf\",\n                                    \"type\": \"DirectoryOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"cni-path\",\n                                \"hostPath\": {\n                                    \"path\": \"/opt/cni/bin\",\n                                    \"type\": \"DirectoryOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"cilium-cgroup\",\n                                \"hostPath\": {\n                                    \"path\": \"/sys/fs/cgroup/unified\",\n                                    \"type\": \"DirectoryOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"etc-cni-netd\",\n                                \"hostPath\": {\n     
                               \"path\": \"/etc/cni/net.d\",\n                                    \"type\": \"DirectoryOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"hostPath\": {\n                                    \"path\": \"/lib/modules\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"hostPath\": {\n                                    \"path\": \"/run/xtables.lock\",\n                                    \"type\": \"FileOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"clustermesh-secrets\",\n                                \"secret\": {\n                                    \"secretName\": \"cilium-clustermesh\",\n                                    \"defaultMode\": 420,\n                                    \"optional\": true\n                                }\n                            },\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"configMap\": {\n                                    \"name\": \"cilium-config\",\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"initContainers\": [\n                            {\n                                \"name\": \"clean-cilium-state\",\n                                \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                                \"command\": [\n                                    \"/init-container.sh\"\n                                
],\n                                \"env\": [\n                                    {\n                                        \"name\": \"CILIUM_ALL_STATE\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"clean-cilium-state\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_BPF_STATE\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"clean-cilium-bpf-state\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_WAIT_BPF_MOUNT\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"wait-bpf-mount\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                       
                 \"memory\": \"100Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"100Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"bpf-maps\",\n                                        \"mountPath\": \"/sys/fs/bpf\",\n                                        \"mountPropagation\": \"HostToContainer\"\n                                    },\n                                    {\n                                        \"name\": \"cilium-cgroup\",\n                                        \"mountPath\": \"/sys/fs/cgroup/unified\",\n                                        \"mountPropagation\": \"HostToContainer\"\n                                    },\n                                    {\n                                        \"name\": \"cilium-run\",\n                                        \"mountPath\": \"/var/run/cilium\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_ADMIN\"\n                                        ]\n                                    },\n                                    \"privileged\": true\n                                }\n                            }\n                        ],\n                        \"containers\": [\n         
                   {\n                                \"name\": \"cilium-agent\",\n                                \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                                \"command\": [\n                                    \"cilium-agent\"\n                                ],\n                                \"args\": [\n                                    \"--config-dir=/tmp/cilium/config-map\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"K8S_NODE_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"spec.nodeName\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"metadata.namespace\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_CLUSTERMESH_CONFIG\",\n                                        \"value\": \"/var/lib/cilium/clustermesh/\"\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_CNI_CHAINING_MODE\",\n                                        \"valueFrom\": {\n                                    
        \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"cni-chaining-mode\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_CUSTOM_CNI_CONF\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"custom-cni-conf\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"api.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                                    },\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                        \"value\": \"443\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"25m\",\n                                        \"memory\": \"128Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"bpf-maps\",\n                               
         \"mountPath\": \"/sys/fs/bpf\"\n                                    },\n                                    {\n                                        \"name\": \"cilium-run\",\n                                        \"mountPath\": \"/var/run/cilium\"\n                                    },\n                                    {\n                                        \"name\": \"cni-path\",\n                                        \"mountPath\": \"/host/opt/cni/bin\"\n                                    },\n                                    {\n                                        \"name\": \"etc-cni-netd\",\n                                        \"mountPath\": \"/host/etc/cni/net.d\"\n                                    },\n                                    {\n                                        \"name\": \"clustermesh-secrets\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/var/lib/cilium/clustermesh\"\n                                    },\n                                    {\n                                        \"name\": \"cilium-config-path\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/tmp/cilium/config-map\"\n                                    },\n                                    {\n                                        \"name\": \"lib-modules\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/lib/modules\"\n                                    },\n                                    {\n                                        \"name\": \"xtables-lock\",\n                                        \"mountPath\": \"/run/xtables.lock\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": 
{\n                                        \"path\": \"/healthz\",\n                                        \"port\": 9876,\n                                        \"host\": \"127.0.0.1\",\n                                        \"scheme\": \"HTTP\",\n                                        \"httpHeaders\": [\n                                            {\n                                                \"name\": \"brief\",\n                                                \"value\": \"true\"\n                                            }\n                                        ]\n                                    },\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 30,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 10\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/healthz\",\n                                        \"port\": 9876,\n                                        \"host\": \"127.0.0.1\",\n                                        \"scheme\": \"HTTP\",\n                                        \"httpHeaders\": [\n                                            {\n                                                \"name\": \"brief\",\n                                                \"value\": \"true\"\n                                            }\n                                        ]\n                                    },\n                                    \"initialDelaySeconds\": 5,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 30,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n            
                    \"startupProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/healthz\",\n                                        \"port\": 9876,\n                                        \"host\": \"127.0.0.1\",\n                                        \"scheme\": \"HTTP\",\n                                        \"httpHeaders\": [\n                                            {\n                                                \"name\": \"brief\",\n                                                \"value\": \"true\"\n                                            }\n                                        ]\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 2,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 105\n                                },\n                                \"lifecycle\": {\n                                    \"postStart\": {\n                                        \"exec\": {\n                                            \"command\": [\n                                                \"/cni-install.sh\",\n                                                \"--cni-exclusive=true\"\n                                            ]\n                                        }\n                                    },\n                                    \"preStop\": {\n                                        \"exec\": {\n                                            \"command\": [\n                                                \"/cni-uninstall.sh\"\n                                            ]\n                                        }\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                     
           \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_ADMIN\",\n                                            \"SYS_MODULE\"\n                                        ]\n                                    },\n                                    \"privileged\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 1,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"cilium\",\n                        \"serviceAccount\": \"cilium\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"nodeAffinity\": {\n                                \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                                    \"nodeSelectorTerms\": [\n                                        {\n                                            \"matchExpressions\": [\n                                                {\n                                                    \"key\": \"kubernetes.io/os\",\n                                                    \"operator\": \"In\",\n                                                    \"values\": [\n                                                        \"linux\"\n                                                    ]\n                                                }\n                                            ]\n                                        }\n                                    ]\n                                
}\n                            },\n                            \"podAntiAffinity\": {\n                                \"requiredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"labelSelector\": {\n                                            \"matchExpressions\": [\n                                                {\n                                                    \"key\": \"k8s-app\",\n                                                    \"operator\": \"In\",\n                                                    \"values\": [\n                                                        \"cilium\"\n                                                    ]\n                                                }\n                                            ]\n                                        },\n                                        \"topologyKey\": \"kubernetes.io/hostname\"\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"OnDelete\"\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 5,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 5,\n                \"numberReady\": 5,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 5,\n                \"numberAvailable\": 5\n            }\n        },\n  
      {\n            \"metadata\": {\n                \"name\": \"kops-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"56e50c15-fe44-4abe-b913-1baecc1bc63c\",\n                \"resourceVersion\": \"546\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-08-13T04:13:10Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"kops-controller.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                    \"k8s-app\": \"kops-controller\",\n                    \"version\": \"v1.22.0-alpha.2\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.22.0-alpha.2\\\"},\\\"name\\\":\\\"kops-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kops-controller\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"dns.alpha.kubernetes.io/internal\\\":\\\"kops-controller.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"kops-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kops-controller\\\",\\\"version\\\":\\\"v1.22.0-alpha.2\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/kops-controller\\\",\\\"--v=2\\\",\\\"--conf=/etc/kubernetes/kops-controller/config/config.yaml\\\"],\\\"env\\\":[{
\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2\\\",\\\"name\\\":\\\"kops-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/config/\\\",\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/kops-controller/pki/\\\",\\\"name\\\":\\\"kops-controller-pki\\\"}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"kops.k8s.io/kops-controller-pki\\\":\\\"\\\",\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"serviceAccount\\\":\\\"kops-controller\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"node-role.kubernetes.io/master\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"name\\\":\\\"kops-controller\\\"},\\\"name\\\":\\\"kops-controller-config\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/kubernetes/kops-controller/\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"kops-controller-pki\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"OnDelete\\\"}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kops-controller\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"kops-controller.addons.k8s.io\",\n                            \"k8s-app\": \"kops-controller\",\n                            \"version\": \"v1.22.0-alpha.2\"\n                        },\n                        \"annotations\": {\n                            
\"dns.alpha.kubernetes.io/internal\": \"kops-controller.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"kops-controller-config\",\n                                \"configMap\": {\n                                    \"name\": \"kops-controller\",\n                                    \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"kops-controller-pki\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/kubernetes/kops-controller/\",\n                                    \"type\": \"Directory\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kops-controller\",\n                                \"image\": \"k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2\",\n                                \"command\": [\n                                    \"/kops-controller\",\n                                    \"--v=2\",\n                                    \"--conf=/etc/kubernetes/kops-controller/config/config.yaml\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        
\"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kops-controller-config\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/config/\"\n                                    },\n                                    {\n                                        \"name\": \"kops-controller-pki\",\n                                        \"mountPath\": \"/etc/kubernetes/kops-controller/pki/\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kops.k8s.io/kops-controller-pki\": \"\",\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"kops-controller\",\n                        \"serviceAccount\": \"kops-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                     
           \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"OnDelete\"\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 1,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 1,\n                \"numberReady\": 1,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 1,\n                \"numberAvailable\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12351\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8f829a1e-1750-43a8-850e-67e2d59276f4\",\n                \"resourceVersion\": \"562\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-08-13T04:13:10Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"networking.cilium.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"io.cilium/app\": \"operator\",\n                    \"name\": \"cilium-operator\",\n                    \"role.kubernetes.io/networking\": \"1\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"networking.cilium.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"io.cilium/app\\\":\\\"operator\\\",\\\"name\\\":\\\"cilium-operator\\\",\\\"role.kubernetes.io/networking\\\":\\\"1\\\"},\\\"name\\\":\\\"cilium-operator\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"replicas\\\":1,\\\"selector\\\":{\\\"matchLabels\\\":{\\\"io.cilium/app\\\":\\\"operator\\\",\\\"name\\\":\\\"cilium-operator\\\"}},\\\"strategy\\\":{\\\"rollingUpdate\\\":{\\\"maxSurge\\\":1,\\\"maxUnavailable\\\":1},\\\"type\\\":\\\"RollingUpdate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"io.cilium/app\\\":\\\"operator\\\",\\\"name\\\":\\\"cilium-operator\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"args\\\":[\\\"--config-dir=/tmp/cilium/config-map\\\",\\\"--debug=$(CILIUM_DEBUG)\\\",\\\"--eni-tags=KubernetesCluster=e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\\\"],\\\"command\\\":[\\\"cilium-operator\\\"],\\\"env\\\":[{\\\"name\\\":\\\"K8S_NODE_NAME\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"apiVersion\\\":\\\"v1\\\",\\\"fieldPath\\\":\\\"spec.nodeName\\\"}}},{\\\"name\\\":\\\"CILIUM_K8S_NAMESPACE\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"apiVersion\\\":\\\"v1\\\",\\\"fieldPath\\\":\\\"metadata.namespace\\\"}}},{\\\"name\\\":\\\"CILIUM_DEBUG\\\",\\\"valueFrom\\\":{\\\"configMapKeyRef\\\":{\\\"key\\\":\\\"debug\\\",\\\"name\\\":\\\"cilium-config\\\",\\\"optional\\\":true}}},{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"api.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\\\"},{\\\"name\\\":\\\"KUBERNETES_SERVICE_PORT\\\",\\\"value\\\":\\\"443\\\"}],\\\"image\\\":\\\"quay.io/cilium/operator:v1.10.3\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"livenessProbe\\\":{\\\"httpGet\\\":{\\\"host\\\":\\\"127.0.0.1\\\",\\\"path\\\":\\\"/healthz\\\",\\
\"port\\\":9234,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":60,\\\"periodSeconds\\\":10,\\\"timeoutSeconds\\\":3},\\\"name\\\":\\\"cilium-operator\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"25m\\\",\\\"memory\\\":\\\"128Mi\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/cilium/config-map\\\",\\\"name\\\":\\\"cilium-config-path\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"restartPolicy\\\":\\\"Always\\\",\\\"serviceAccount\\\":\\\"cilium-operator\\\",\\\"serviceAccountName\\\":\\\"cilium-operator\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"name\\\":\\\"cilium-config\\\"},\\\"name\\\":\\\"cilium-config-path\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"io.cilium/app\": \"operator\",\n                        \"name\": \"cilium-operator\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"io.cilium/app\": \"operator\",\n                            \"name\": \"cilium-operator\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"configMap\": {\n                                    \"name\": \"cilium-config\",\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        
\"containers\": [\n                            {\n                                \"name\": \"cilium-operator\",\n                                \"image\": \"quay.io/cilium/operator:v1.10.3\",\n                                \"command\": [\n                                    \"cilium-operator\"\n                                ],\n                                \"args\": [\n                                    \"--config-dir=/tmp/cilium/config-map\",\n                                    \"--debug=$(CILIUM_DEBUG)\",\n                                    \"--eni-tags=KubernetesCluster=e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"K8S_NODE_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"spec.nodeName\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"metadata.namespace\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_DEBUG\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n           
                                     \"name\": \"cilium-config\",\n                                                \"key\": \"debug\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"api.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                                    },\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                        \"value\": \"443\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"25m\",\n                                        \"memory\": \"128Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"cilium-config-path\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/tmp/cilium/config-map\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/healthz\",\n                                        \"port\": 9234,\n                                        \"host\": \"127.0.0.1\",\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    
\"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 3,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"cilium-operator\",\n                        \"serviceAccount\": \"cilium-operator\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": 1\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                
\"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-13T04:13:34Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:13:34Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-13T04:14:15Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:13:34Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"cilium-operator-5c789c847b\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"53644505-28a0-48bc-aed7-9fd9fefe1607\",\n                \"resourceVersion\": \"934\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-08-13T04:13:07Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"CoreDNS\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": 
\"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"kube-dns\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"kubernetes.io/name\\\":\\\"CoreDNS\\\"},\\\"name\\\":\\\"coredns\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"strategy\\\":{\\\"rollingUpdate\\\":{\\\"maxSurge\\\":\\\"10%\\\",\\\"maxUnavailable\\\":1},\\\"type\\\":\\\"RollingUpdate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"k8s-app\\\":\\\"kube-dns\\\"}},\\\"spec\\\":{\\\"affinity\\\":{\\\"podAntiAffinity\\\":{\\\"preferredDuringSchedulingIgnoredDuringExecution\\\":[{\\\"podAffinityTerm\\\":{\\\"labelSelector\\\":{\\\"matchExpressions\\\":[{\\\"key\\\":\\\"k8s-app\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"kube-dns\\\"]}]},\\\"topologyKey\\\":\\\"kubernetes.io/hostname\\\"},\\\"weight\\\":100}]}},\\\"containers\\\":[{\\\"args\\\":[\\\"-conf\\\",\\\"/etc/coredns/Corefile\\\"],\\\"image\\\":\\\"k8s.gcr.io/coredns/coredns:v1.8.4\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"livenessProbe\\\":{\\\"failureThreshold\\\":5,\\\"httpGet\\\":{\\\"path\\\":\\\"/health\\\",\\\"port\\\":8080,\\\"scheme\\\":\\\"HTTP\\\"},\\\"initialDelaySeconds\\\":60,\\\"successThreshold\\\":1,\\\"timeoutSeconds\\\":5},\\\"name\\\":\\\"coredns\\\",\\\"ports\\\":[{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns\\\",\\\"protocol\\\":\\\"UDP\\\"},{\\\"containerPort\\\":53,\\\"name\\\":\\\"dns-tcp\\\",\\\"protocol\\\":\\\"TCP\\\"},{\\\"containerPort\\\":9153,\\\"name\\\":\\\"metrics\\\",\\\"protocol\\\":\\\"TCP\\\"}],\\\"readinessProbe\\\":{\\\"httpGe
t\\\":{\\\"path\\\":\\\"/ready\\\",\\\"port\\\":8181,\\\"scheme\\\":\\\"HTTP\\\"}},\\\"resources\\\":{\\\"limits\\\":{\\\"memory\\\":\\\"170Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"70Mi\\\"}},\\\"securityContext\\\":{\\\"allowPrivilegeEscalation\\\":false,\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_BIND_SERVICE\\\"],\\\"drop\\\":[\\\"all\\\"]},\\\"readOnlyRootFilesystem\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/coredns\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true}]}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"configMap\\\":{\\\"items\\\":[{\\\"key\\\":\\\"Corefile\\\",\\\"path\\\":\\\"Corefile\\\"}],\\\"name\\\":\\\"coredns\\\"},\\\"name\\\":\\\"config-volume\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                
                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n 
                               },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                        
            \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n   
                                                 }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": \"10%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 2,\n                \"replicas\": 2,\n                \"updatedReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-13T04:15:25Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:15:25Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": 
\"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-13T04:15:30Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:13:34Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-5dc785954d\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1ea8163e-9265-493e-8130-a50174024a0b\",\n                \"resourceVersion\": \"876\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-08-13T04:13:07Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"coredns.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"coredns.addons.k8s.io\",\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"kubernetes.io/cluster-service\": \"true\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"coredns.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"coredns.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"coredns-autoscaler\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\"},\\\"name\\\":\\\"coredns-autoscaler\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-app\\\":\\\"coredns-autoscaler\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/cluster-proportional-autoscaler\\\",\\\"--namespace=kube-system\\\",\\\"--configmap=coredns-autoscaler\\\",\\\"--target=Deployment/coredns\\\",\\\"--default-params={\\\\\\\"linear\\\\\\\":{\\\\\\\"coresPerReplica\\\\\\\":256,\\\\\\\"nodesPerReplica\\\\\\\":16,\\\\\\\"preventSinglePointFailure\\\\\\\":true}}\\\",\\\"--logtostderr=true\\\",\\\"--v=2\\\"],\\\"image\\\":\\\"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\\\",\\\"name\\\":\\\"autoscaler\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"10Mi\\\"}}}],\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccountName\\\":\\\"coredns-autoscaler\\\",\\\"tolerations\\\":[{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": 
{\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n         
               \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": \"25%\",\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-13T04:15:21Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:15:21Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        
\"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-13T04:15:21Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:13:34Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-autoscaler-84d4cfd89c\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a215ed33-127e-4307-932d-1e46c3624f67\",\n                \"resourceVersion\": \"549\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-08-13T04:13:08Z\",\n                \"labels\": {\n                    \"addon.kops.k8s.io/name\": \"dns-controller.addons.k8s.io\",\n                    \"app.kubernetes.io/managed-by\": \"kops\",\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                    \"k8s-app\": \"dns-controller\",\n                    \"version\": \"v1.22.0-alpha.2\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"creationTimestamp\\\":null,\\\"labels\\\":{\\\"addon.kops.k8s.io/name\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"app.kubernetes.io/managed-by\\\":\\\"kops\\\",\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.22.0-alpha.2\\\"},\\\"name\\\":\\\"dns-controller\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"replicas\\\":1,\\\"selector\\\":{\\\"matchLabels\\\":{\\\"k8s-app\\\":\\\"dns-controller\\\"}},\\\"strategy\\\":{\\\"type\\\":\\\"Recreate\\\"},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"k8s-addon\\\":\\\"dns-controller.addons.k8s.io\\\",\\\"k8s-app\\\":\\\"dns-controller\\\",\\\"version\\\":\\\"v1.22.0-alpha.2\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/dns-controller\\\",\\\"--watch-ingress=false\\\",\\\"--dns=aws-route53\\\",\\\"--zone=*/ZEMLNXIIWQ0RV\\\",\\\"--zone=*/*\\\",\\\"-v=2\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBERNETES_SERVICE_HOST\\\",\\\"value\\\":\\\"127.0.0.1\\\"}],\\\"image\\\":\\\"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2\\\",\\\"name\\\":\\\"dns-controller\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"runAsNonRoot\\\":true}}],\\\"dnsPolicy\\\":\\\"Default\\\",\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"node-role.kubernetes.io/master\\\":\\\"\\\"},\\\"priorityClassName\\\":\\\"system-cluster-critical\\\",\\\"serviceAccount\\\":\\\"dns-controller\\\",\\\"tolerations\\\":[{\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\"\n                    
}\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"version\": \"v1.22.0-alpha.2\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                      
          \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        \"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"Recreate\"\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 1,\n                \"updatedReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-13T04:14:12Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:14:12Z\",\n  
                      \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-08-13T04:14:12Z\",\n                        \"lastTransitionTime\": \"2021-08-13T04:13:34Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"dns-controller-9df689cc8\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12351\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-5c789c847b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5e8851cb-0406-417a-a07b-15f093c4023f\",\n                \"resourceVersion\": \"561\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\",\n                \"labels\": {\n                    \"io.cilium/app\": \"operator\",\n                    \"name\": \"cilium-operator\",\n                    \"pod-template-hash\": \"5c789c847b\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"2\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"cilium-operator\",\n                        \"uid\": 
\"8f829a1e-1750-43a8-850e-67e2d59276f4\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"io.cilium/app\": \"operator\",\n                        \"name\": \"cilium-operator\",\n                        \"pod-template-hash\": \"5c789c847b\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"io.cilium/app\": \"operator\",\n                            \"name\": \"cilium-operator\",\n                            \"pod-template-hash\": \"5c789c847b\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"configMap\": {\n                                    \"name\": \"cilium-config\",\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"cilium-operator\",\n                                \"image\": \"quay.io/cilium/operator:v1.10.3\",\n                                \"command\": [\n                                    \"cilium-operator\"\n                                ],\n                                \"args\": [\n                                    \"--config-dir=/tmp/cilium/config-map\",\n                                    \"--debug=$(CILIUM_DEBUG)\",\n                                    
\"--eni-tags=KubernetesCluster=e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"K8S_NODE_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"spec.nodeName\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"metadata.namespace\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"CILIUM_DEBUG\",\n                                        \"valueFrom\": {\n                                            \"configMapKeyRef\": {\n                                                \"name\": \"cilium-config\",\n                                                \"key\": \"debug\",\n                                                \"optional\": true\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": 
\"api.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                                    },\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                        \"value\": \"443\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"25m\",\n                                        \"memory\": \"128Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"cilium-config-path\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/tmp/cilium/config-map\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/healthz\",\n                                        \"port\": 9234,\n                                        \"host\": \"127.0.0.1\",\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 3,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": 
\"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"cilium-operator\",\n                        \"serviceAccount\": \"cilium-operator\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-5dc785954d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1aa546ea-b8b1-4d18-81da-b398e6b7ff13\",\n                \"resourceVersion\": \"933\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"5dc785954d\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"2\",\n                    \"deployment.kubernetes.io/max-replicas\": \"3\",\n                   
 \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns\",\n                        \"uid\": \"53644505-28a0-48bc-aed7-9fd9fefe1607\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\",\n                        \"pod-template-hash\": \"5dc785954d\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\",\n                            \"pod-template-hash\": \"5dc785954d\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                
\"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.4\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                       
         ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n         
                           \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"affinity\": {\n                            \"podAntiAffinity\": {\n                                \"preferredDuringSchedulingIgnoredDuringExecution\": [\n                                    {\n                                        \"weight\": 100,\n                                        \"podAffinityTerm\": {\n                                            \"labelSelector\": {\n                                                \"matchExpressions\": [\n                                                    {\n                                                        \"key\": \"k8s-app\",\n                                                        \"operator\": \"In\",\n                                                        \"values\": [\n                                                            \"kube-dns\"\n                                                        ]\n                                                    }\n                                                ]\n                                            },\n                                            \"topologyKey\": \"kubernetes.io/hostname\"\n                                        }\n                                    }\n                                ]\n                            }\n     
                   },\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 2,\n                \"fullyLabeledReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"observedGeneration\": 2\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-autoscaler-84d4cfd89c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d8cca4bb-1fc5-438a-a45d-8d6da9c5719a\",\n                \"resourceVersion\": \"875\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\",\n                \"labels\": {\n                    \"k8s-app\": \"coredns-autoscaler\",\n                    \"pod-template-hash\": \"84d4cfd89c\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"2\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns-autoscaler\",\n                        \"uid\": \"1ea8163e-9265-493e-8130-a50174024a0b\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n         
   },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"coredns-autoscaler\",\n                        \"pod-template-hash\": \"84d4cfd89c\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"coredns-autoscaler\",\n                            \"pod-template-hash\": \"84d4cfd89c\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": \"autoscaler\",\n                                \"image\": \"k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4\",\n                                \"command\": [\n                                    \"/cluster-proportional-autoscaler\",\n                                    \"--namespace=kube-system\",\n                                    \"--configmap=coredns-autoscaler\",\n                                    \"--target=Deployment/coredns\",\n                                    \"--default-params={\\\"linear\\\":{\\\"coresPerReplica\\\":256,\\\"nodesPerReplica\\\":16,\\\"preventSinglePointFailure\\\":true}}\",\n                                    \"--logtostderr=true\",\n                                    \"--v=2\"\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"20m\",\n                                        \"memory\": \"10Mi\"\n                                    }\n             
                   },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\"\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns-autoscaler\",\n                        \"serviceAccount\": \"coredns-autoscaler\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"dns-controller-9df689cc8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4a040ba8-c570-432a-b345-6e7dc9efc54e\",\n                \"resourceVersion\": \"548\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\",\n                \"labels\": {\n                    \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n 
                   \"k8s-app\": \"dns-controller\",\n                    \"pod-template-hash\": \"9df689cc8\",\n                    \"version\": \"v1.22.0-alpha.2\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"1\",\n                    \"deployment.kubernetes.io/max-replicas\": \"1\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"dns-controller\",\n                        \"uid\": \"a215ed33-127e-4307-932d-1e46c3624f67\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 1,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"dns-controller\",\n                        \"pod-template-hash\": \"9df689cc8\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-addon\": \"dns-controller.addons.k8s.io\",\n                            \"k8s-app\": \"dns-controller\",\n                            \"pod-template-hash\": \"9df689cc8\",\n                            \"version\": \"v1.22.0-alpha.2\"\n                        },\n                        \"annotations\": {\n                            \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"name\": 
\"dns-controller\",\n                                \"image\": \"k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2\",\n                                \"command\": [\n                                    \"/dns-controller\",\n                                    \"--watch-ingress=false\",\n                                    \"--dns=aws-route53\",\n                                    \"--zone=*/ZEMLNXIIWQ0RV\",\n                                    \"--zone=*/*\",\n                                    \"-v=2\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                        \"value\": \"127.0.0.1\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"requests\": {\n                                        \"cpu\": \"50m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"runAsNonRoot\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"node-role.kubernetes.io/master\": \"\"\n                        },\n                        \"serviceAccountName\": \"dns-controller\",\n                        
\"serviceAccount\": \"dns-controller\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 1,\n                \"fullyLabeledReplicas\": 1,\n                \"readyReplicas\": 1,\n                \"availableReplicas\": 1,\n                \"observedGeneration\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12351\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"cilium-crxcg\",\n                \"generateName\": \"cilium-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"153c5cc5-1ee0-4582-811c-3f03e59a2319\",\n                \"resourceVersion\": \"952\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"557d99f659\",\n                    \"k8s-app\": \"cilium\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"cilium\",\n                        \"uid\": 
\"3e4ea738-81d7-4f22-8d95-a3162c07bdad\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cilium-run\",\n                        \"hostPath\": {\n                            \"path\": \"/var/run/cilium\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"bpf-maps\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/bpf\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni-path\",\n                        \"hostPath\": {\n                            \"path\": \"/opt/cni/bin\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-cgroup\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/cgroup/unified\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-cni-netd\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n               
         \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"clustermesh-secrets\",\n                        \"secret\": {\n                            \"secretName\": \"cilium-clustermesh\",\n                            \"defaultMode\": 420,\n                            \"optional\": true\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-config-path\",\n                        \"configMap\": {\n                            \"name\": \"cilium-config\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-c2pd4\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    
\"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"command\": [\n                            \"/init-container.sh\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"CILIUM_ALL_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_BPF_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": 
\"clean-cilium-bpf-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_WAIT_BPF_MOUNT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"wait-bpf-mount\",\n                                        \"optional\": true\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-cgroup\",\n                                \"mountPath\": \"/sys/fs/cgroup/unified\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-c2pd4\",\n      
                          \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"command\": [\n                            \"cilium-agent\"\n                        ],\n                        \"args\": [\n                            \"--config-dir=/tmp/cilium/config-map\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"K8S_NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                  
      \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CLUSTERMESH_CONFIG\",\n                                \"value\": \"/var/lib/cilium/clustermesh/\"\n                            },\n                            {\n                                \"name\": \"CILIUM_CNI_CHAINING_MODE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"cni-chaining-mode\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CUSTOM_CNI_CONF\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"custom-cni-conf\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"api.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                \"value\": \"443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"25m\",\n   
                             \"memory\": \"128Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"cni-path\",\n                                \"mountPath\": \"/host/opt/cni/bin\"\n                            },\n                            {\n                                \"name\": \"etc-cni-netd\",\n                                \"mountPath\": \"/host/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"clustermesh-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/cilium/clustermesh\"\n                            },\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/tmp/cilium/config-map\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-c2pd4\",\n                          
      \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 10\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                     
   \"startupProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 2,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 105\n                        },\n                        \"lifecycle\": {\n                            \"postStart\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-install.sh\",\n                                        \"--cni-exclusive=true\"\n                                    ]\n                                }\n                            },\n                            \"preStop\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-uninstall.sh\"\n                                    ]\n                                }\n                            }\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    
\"NET_ADMIN\",\n                                    \"SYS_MODULE\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 1,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"cilium\",\n                \"serviceAccount\": \"cilium\",\n                \"nodeName\": \"ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-39-193.ca-central-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    },\n                    \"podAntiAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"labelSelector\": {\n                                    \"matchExpressions\": [\n                                        {\n                                            \"key\": \"k8s-app\",\n                                            \"operator\": 
\"In\",\n                                            \"values\": [\n                                                \"cilium\"\n                                            ]\n                                        }\n                                    ]\n                                },\n                                \"topologyKey\": \"kubernetes.io/hostname\"\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n            
        {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:13:43Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:15:34Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:15:34Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:13:34Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.39.193\",\n                \"podIP\": \"172.20.39.193\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.39.193\"\n                    }\n                ],\n                \"startTime\": \"2021-08-13T04:13:34Z\",\n                
\"initContainerStatuses\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-08-13T04:13:43Z\",\n                                \"finishedAt\": \"2021-08-13T04:13:43Z\",\n                                \"containerID\": \"docker://0ee90972574c169bdc361bc072b76fe3524f8d30f60acf97f56a85e7cea4e967\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"imageID\": \"docker-pullable://quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                        \"containerID\": \"docker://0ee90972574c169bdc361bc072b76fe3524f8d30f60acf97f56a85e7cea4e967\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-13T04:14:53Z\"\n                            }\n                        },\n                        \"lastState\": {\n                            \"terminated\": {\n                                \"exitCode\": 1,\n                                \"reason\": \"Error\",\n                                \"startedAt\": \"2021-08-13T04:13:43Z\",\n                                \"finishedAt\": \"2021-08-13T04:14:52Z\",\n                                \"containerID\": \"docker://8b0c25e5c43201447577e7bab94f9d76d42a575a2fb1d8105cf0920a9225c82d\"\n                            }\n 
                       },\n                        \"ready\": true,\n                        \"restartCount\": 1,\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"imageID\": \"docker-pullable://quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                        \"containerID\": \"docker://b8bd61861186fd90df82e65cec1a33f90d289c6f9180172e144f49c61bd8184f\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-d2ptz\",\n                \"generateName\": \"cilium-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fbd4f950-6100-413a-8443-9001a7dbe943\",\n                \"resourceVersion\": \"812\",\n                \"creationTimestamp\": \"2021-08-13T04:14:39Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"557d99f659\",\n                    \"k8s-app\": \"cilium\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"cilium\",\n                        \"uid\": \"3e4ea738-81d7-4f22-8d95-a3162c07bdad\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cilium-run\",\n                     
   \"hostPath\": {\n                            \"path\": \"/var/run/cilium\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"bpf-maps\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/bpf\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni-path\",\n                        \"hostPath\": {\n                            \"path\": \"/opt/cni/bin\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-cgroup\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/cgroup/unified\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-cni-netd\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"clustermesh-secrets\",\n                       
 \"secret\": {\n                            \"secretName\": \"cilium-clustermesh\",\n                            \"defaultMode\": 420,\n                            \"optional\": true\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-config-path\",\n                        \"configMap\": {\n                            \"name\": \"cilium-config\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-6zhxw\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                              
                      \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"command\": [\n                            \"/init-container.sh\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"CILIUM_ALL_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_BPF_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-bpf-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_WAIT_BPF_MOUNT\",\n                                \"valueFrom\": {\n                                    
\"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"wait-bpf-mount\",\n                                        \"optional\": true\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-cgroup\",\n                                \"mountPath\": \"/sys/fs/cgroup/unified\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-6zhxw\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        
\"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"command\": [\n                            \"cilium-agent\"\n                        ],\n                        \"args\": [\n                            \"--config-dir=/tmp/cilium/config-map\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"K8S_NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CLUSTERMESH_CONFIG\",\n                                \"value\": \"/var/lib/cilium/clustermesh/\"\n                            },\n       
                     {\n                                \"name\": \"CILIUM_CNI_CHAINING_MODE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"cni-chaining-mode\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CUSTOM_CNI_CONF\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"custom-cni-conf\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"api.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                \"value\": \"443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"25m\",\n                                \"memory\": \"128Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\"\n                            },\n                            {\n 
                               \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"cni-path\",\n                                \"mountPath\": \"/host/opt/cni/bin\"\n                            },\n                            {\n                                \"name\": \"etc-cni-netd\",\n                                \"mountPath\": \"/host/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"clustermesh-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/cilium/clustermesh\"\n                            },\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/tmp/cilium/config-map\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-6zhxw\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                
\"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 10\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"startupProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                        
            {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 2,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 105\n                        },\n                        \"lifecycle\": {\n                            \"postStart\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-install.sh\",\n                                        \"--cni-exclusive=true\"\n                                    ]\n                                }\n                            },\n                            \"preStop\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-uninstall.sh\"\n                                    ]\n                                }\n                            }\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"SYS_MODULE\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 1,\n                
\"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"cilium\",\n                \"serviceAccount\": \"cilium\",\n                \"nodeName\": \"ip-172-20-46-56.ca-central-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-46-56.ca-central-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    },\n                    \"podAntiAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"labelSelector\": {\n                                    \"matchExpressions\": [\n                                        {\n                                            \"key\": \"k8s-app\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"cilium\"\n                                            ]\n                                        }\n                                    ]\n                                },\n                                \"topologyKey\": \"kubernetes.io/hostname\"\n        
                    }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                
\"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:14:52Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:15:06Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:15:06Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:14:39Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.46.56\",\n                \"podIP\": \"172.20.46.56\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.46.56\"\n                    }\n                ],\n                \"startTime\": \"2021-08-13T04:14:40Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": 
\"2021-08-13T04:14:51Z\",\n                                \"finishedAt\": \"2021-08-13T04:14:51Z\",\n                                \"containerID\": \"docker://4981eb56a8808af8ed0e480328d0740d183e81422aaf570e9d97ff637db363cb\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"imageID\": \"docker-pullable://quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                        \"containerID\": \"docker://4981eb56a8808af8ed0e480328d0740d183e81422aaf570e9d97ff637db363cb\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-13T04:14:52Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"imageID\": \"docker-pullable://quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                        \"containerID\": \"docker://b26bc9f167cb6e2baa9a54a99e57db3a05870799134cf95344c3e0a376fa5107\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-mnwnd\",\n                \"generateName\": \"cilium-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"250f053f-8797-45ba-98d0-f6402d7c5525\",\n                \"resourceVersion\": \"840\",\n                \"creationTimestamp\": \"2021-08-13T04:14:44Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"557d99f659\",\n                    \"k8s-app\": \"cilium\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"cilium\",\n                        \"uid\": \"3e4ea738-81d7-4f22-8d95-a3162c07bdad\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cilium-run\",\n                        \"hostPath\": {\n                            \"path\": \"/var/run/cilium\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"bpf-maps\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/bpf\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni-path\",\n                        \"hostPath\": {\n                            \"path\": \"/opt/cni/bin\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": 
\"cilium-cgroup\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/cgroup/unified\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-cni-netd\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"clustermesh-secrets\",\n                        \"secret\": {\n                            \"secretName\": \"cilium-clustermesh\",\n                            \"defaultMode\": 420,\n                            \"optional\": true\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-config-path\",\n                        \"configMap\": {\n                            \"name\": \"cilium-config\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-qwq57\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                        
                \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"command\": [\n                            \"/init-container.sh\"\n                        ],\n                        \"env\": [\n                            {\n     
                           \"name\": \"CILIUM_ALL_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_BPF_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-bpf-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_WAIT_BPF_MOUNT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"wait-bpf-mount\",\n                                        \"optional\": true\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            
{\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-cgroup\",\n                                \"mountPath\": \"/sys/fs/cgroup/unified\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-qwq57\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"command\": [\n                            \"cilium-agent\"\n                        ],\n                        \"args\": [\n                            \"--config-dir=/tmp/cilium/config-map\"\n  
                      ],\n                        \"env\": [\n                            {\n                                \"name\": \"K8S_NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CLUSTERMESH_CONFIG\",\n                                \"value\": \"/var/lib/cilium/clustermesh/\"\n                            },\n                            {\n                                \"name\": \"CILIUM_CNI_CHAINING_MODE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"cni-chaining-mode\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CUSTOM_CNI_CONF\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                         
               \"key\": \"custom-cni-conf\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"api.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                \"value\": \"443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"25m\",\n                                \"memory\": \"128Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"cni-path\",\n                                \"mountPath\": \"/host/opt/cni/bin\"\n                            },\n                            {\n                                \"name\": \"etc-cni-netd\",\n                                \"mountPath\": \"/host/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"clustermesh-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/cilium/clustermesh\"\n                            },\n               
             {\n                                \"name\": \"cilium-config-path\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/tmp/cilium/config-map\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-qwq57\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 10\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": 
\"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"startupProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 2,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 105\n                        },\n                        \"lifecycle\": {\n                            \"postStart\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-install.sh\",\n                                        \"--cni-exclusive=true\"\n                          
          ]\n                                }\n                            },\n                            \"preStop\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-uninstall.sh\"\n                                    ]\n                                }\n                            }\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"SYS_MODULE\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 1,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"cilium\",\n                \"serviceAccount\": \"cilium\",\n                \"nodeName\": \"ip-172-20-37-248.ca-central-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                          
  \"values\": [\n                                                \"ip-172-20-37-248.ca-central-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    },\n                    \"podAntiAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"labelSelector\": {\n                                    \"matchExpressions\": [\n                                        {\n                                            \"key\": \"k8s-app\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"cilium\"\n                                            ]\n                                        }\n                                    ]\n                                },\n                                \"topologyKey\": \"kubernetes.io/hostname\"\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n      
                  \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:14:58Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:15:12Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n             
           \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:15:12Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:14:44Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.37.248\",\n                \"podIP\": \"172.20.37.248\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.37.248\"\n                    }\n                ],\n                \"startTime\": \"2021-08-13T04:14:45Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-08-13T04:14:56Z\",\n                                \"finishedAt\": \"2021-08-13T04:14:56Z\",\n                                \"containerID\": \"docker://6886e4eb40c06f02060c8fcef8e28ee8c2234010fe2a278ce9a4e636362e829f\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"imageID\": \"docker-pullable://quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                        \"containerID\": \"docker://6886e4eb40c06f02060c8fcef8e28ee8c2234010fe2a278ce9a4e636362e829f\"\n                    }\n                ],\n                \"containerStatuses\": [\n     
               {\n                        \"name\": \"cilium-agent\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-13T04:14:58Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"imageID\": \"docker-pullable://quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                        \"containerID\": \"docker://03fcda29a26d8715880bd86887a0cde9c11c3066842dfe0aaa0d1a25ef31584b\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-operator-5c789c847b-7lmbv\",\n                \"generateName\": \"cilium-operator-5c789c847b-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"434b1717-2bfd-464d-ab68-6872f6b69471\",\n                \"resourceVersion\": \"560\",\n                \"creationTimestamp\": \"2021-08-13T04:13:34Z\",\n                \"labels\": {\n                    \"io.cilium/app\": \"operator\",\n                    \"name\": \"cilium-operator\",\n                    \"pod-template-hash\": \"5c789c847b\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"cilium-operator-5c789c847b\",\n                        \"uid\": \"5e8851cb-0406-417a-a07b-15f093c4023f\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n     
       },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cilium-config-path\",\n                        \"configMap\": {\n                            \"name\": \"cilium-config\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-kmspz\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n        
                                ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"cilium-operator\",\n                        \"image\": \"quay.io/cilium/operator:v1.10.3\",\n                        \"command\": [\n                            \"cilium-operator\"\n                        ],\n                        \"args\": [\n                            \"--config-dir=/tmp/cilium/config-map\",\n                            \"--debug=$(CILIUM_DEBUG)\",\n                            \"--eni-tags=KubernetesCluster=e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"K8S_NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_DEBUG\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n      
                                  \"name\": \"cilium-config\",\n                                        \"key\": \"debug\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"api.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                \"value\": \"443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"25m\",\n                                \"memory\": \"128Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/tmp/cilium/config-map\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-kmspz\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9234,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            
\"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 3,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"serviceAccountName\": \"cilium-operator\",\n                \"serviceAccount\": \"cilium-operator\",\n                \"nodeName\": \"ip-172-20-39-193.ca-central-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:14:11Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        
\"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:14:15Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:14:15Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:14:11Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.39.193\",\n                \"podIP\": \"172.20.39.193\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.39.193\"\n                    }\n                ],\n                \"startTime\": \"2021-08-13T04:14:11Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"cilium-operator\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-13T04:14:15Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/cilium/operator:v1.10.3\",\n                        \"imageID\": \"docker-pullable://quay.io/cilium/operator@sha256:5c64867fbf3e09c1f05a44c6b4954ca19563230e89ff29724c7845ca550be66e\",\n                        \"containerID\": \"docker://add185ea3b2db55104411a7f0821dae5450617575776072415ed69e616f4fab8\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            
}\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-wqcjf\",\n                \"generateName\": \"cilium-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"30446225-efaf-41d4-b71f-63bf2c659614\",\n                \"resourceVersion\": \"938\",\n                \"creationTimestamp\": \"2021-08-13T04:14:43Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"557d99f659\",\n                    \"k8s-app\": \"cilium\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"annotations\": {\n                    \"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"cilium\",\n                        \"uid\": \"3e4ea738-81d7-4f22-8d95-a3162c07bdad\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cilium-run\",\n                        \"hostPath\": {\n                            \"path\": \"/var/run/cilium\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"bpf-maps\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/bpf\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni-path\",\n                        \"hostPath\": {\n                   
         \"path\": \"/opt/cni/bin\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-cgroup\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/cgroup/unified\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-cni-netd\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"clustermesh-secrets\",\n                        \"secret\": {\n                            \"secretName\": \"cilium-clustermesh\",\n                            \"defaultMode\": 420,\n                            \"optional\": true\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-config-path\",\n                        \"configMap\": {\n                            \"name\": \"cilium-config\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-5h597\",\n  
                      \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"image\": 
\"quay.io/cilium/cilium:v1.10.3\",\n                        \"command\": [\n                            \"/init-container.sh\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"CILIUM_ALL_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_BPF_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-bpf-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_WAIT_BPF_MOUNT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"wait-bpf-mount\",\n                                        \"optional\": true\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n           
                     \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-cgroup\",\n                                \"mountPath\": \"/sys/fs/cgroup/unified\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-5h597\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"image\": 
\"quay.io/cilium/cilium:v1.10.3\",\n                        \"command\": [\n                            \"cilium-agent\"\n                        ],\n                        \"args\": [\n                            \"--config-dir=/tmp/cilium/config-map\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"K8S_NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CLUSTERMESH_CONFIG\",\n                                \"value\": \"/var/lib/cilium/clustermesh/\"\n                            },\n                            {\n                                \"name\": \"CILIUM_CNI_CHAINING_MODE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"cni-chaining-mode\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                  
              \"name\": \"CILIUM_CUSTOM_CNI_CONF\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"custom-cni-conf\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"api.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                \"value\": \"443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"25m\",\n                                \"memory\": \"128Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"cni-path\",\n                                \"mountPath\": \"/host/opt/cni/bin\"\n                            },\n                            {\n                                \"name\": \"etc-cni-netd\",\n                                \"mountPath\": \"/host/etc/cni/net.d\"\n                            },\n                     
       {\n                                \"name\": \"clustermesh-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/cilium/clustermesh\"\n                            },\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/tmp/cilium/config-map\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-5h597\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                    
        \"successThreshold\": 1,\n                            \"failureThreshold\": 10\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"startupProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 2,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 105\n                        },\n                        \"lifecycle\": {\n                            \"postStart\": {\n  
                              \"exec\": {\n                                    \"command\": [\n                                        \"/cni-install.sh\",\n                                        \"--cni-exclusive=true\"\n                                    ]\n                                }\n                            },\n                            \"preStop\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-uninstall.sh\"\n                                    ]\n                                }\n                            }\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\",\n                                    \"SYS_MODULE\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 1,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"cilium\",\n                \"serviceAccount\": \"cilium\",\n                \"nodeName\": \"ip-172-20-60-176.ca-central-1.compute.internal\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                
    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"ip-172-20-60-176.ca-central-1.compute.internal\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    },\n                    \"podAntiAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": [\n                            {\n                                \"labelSelector\": {\n                                    \"matchExpressions\": [\n                                        {\n                                            \"key\": \"k8s-app\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"cilium\"\n                                            ]\n                                        }\n                                    ]\n                                },\n                                \"topologyKey\": \"kubernetes.io/hostname\"\n                            }\n                        ]\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": 
\"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:14:56Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                       
 \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:15:31Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:15:31Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-08-13T04:14:43Z\"\n                    }\n                ],\n                \"hostIP\": \"172.20.60.176\",\n                \"podIP\": \"172.20.60.176\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.20.60.176\"\n                    }\n                ],\n                \"startTime\": \"2021-08-13T04:14:44Z\",\n                \"initContainerStatuses\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"state\": {\n                            \"terminated\": {\n                                \"exitCode\": 0,\n                                \"reason\": \"Completed\",\n                                \"startedAt\": \"2021-08-13T04:14:55Z\",\n                                \"finishedAt\": \"2021-08-13T04:14:55Z\",\n                                \"containerID\": \"docker://d62e8efb83f497246cc89e50f7a028bca73084856b434f849753fe73b953f18d\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"imageID\": 
\"docker-pullable://quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                        \"containerID\": \"docker://d62e8efb83f497246cc89e50f7a028bca73084856b434f849753fe73b953f18d\"\n                    }\n                ],\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-08-13T04:14:56Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"imageID\": \"docker-pullable://quay.io/cilium/cilium@sha256:8419531c5d3677158802882bdfe2297915c43f2ebe3649551aaac22de9f6d565\",\n                        \"containerID\": \"docker://e23f6b81706343aff2143604ddb7aa24ad705af4f822b3642c9e4ee30a0241b2\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"cilium-zp7bd\",\n                \"generateName\": \"cilium-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"fe77793f-1552-4a32-a12c-cf4df9568845\",\n                \"resourceVersion\": \"943\",\n                \"creationTimestamp\": \"2021-08-13T04:14:41Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"557d99f659\",\n                    \"k8s-app\": \"cilium\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"annotations\": {\n                    
\"scheduler.alpha.kubernetes.io/critical-pod\": \"\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"cilium\",\n                        \"uid\": \"3e4ea738-81d7-4f22-8d95-a3162c07bdad\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cilium-run\",\n                        \"hostPath\": {\n                            \"path\": \"/var/run/cilium\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"bpf-maps\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/bpf\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cni-path\",\n                        \"hostPath\": {\n                            \"path\": \"/opt/cni/bin\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-cgroup\",\n                        \"hostPath\": {\n                            \"path\": \"/sys/fs/cgroup/unified\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-cni-netd\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n         
           },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"clustermesh-secrets\",\n                        \"secret\": {\n                            \"secretName\": \"cilium-clustermesh\",\n                            \"defaultMode\": 420,\n                            \"optional\": true\n                        }\n                    },\n                    {\n                        \"name\": \"cilium-config-path\",\n                        \"configMap\": {\n                            \"name\": \"cilium-config\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-jrxhj\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": 
\"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"initContainers\": [\n                    {\n                        \"name\": \"clean-cilium-state\",\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"command\": [\n                            \"/init-container.sh\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"CILIUM_ALL_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                      
      {\n                                \"name\": \"CILIUM_BPF_STATE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"clean-cilium-bpf-state\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_WAIT_BPF_MOUNT\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"wait-bpf-mount\",\n                                        \"optional\": true\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"100Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\",\n                                \"mountPropagation\": \"HostToContainer\"\n                            },\n                            {\n                                \"name\": \"cilium-cgroup\",\n                                \"mountPath\": \"/sys/fs/cgroup/unified\",\n                                \"mountPropagation\": \"HostToContainer\"\n                          
  },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-jrxhj\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"cilium-agent\",\n                        \"image\": \"quay.io/cilium/cilium:v1.10.3\",\n                        \"command\": [\n                            \"cilium-agent\"\n                        ],\n                        \"args\": [\n                            \"--config-dir=/tmp/cilium/config-map\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"K8S_NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            
},\n                            {\n                                \"name\": \"CILIUM_K8S_NAMESPACE\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"metadata.namespace\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CLUSTERMESH_CONFIG\",\n                                \"value\": \"/var/lib/cilium/clustermesh/\"\n                            },\n                            {\n                                \"name\": \"CILIUM_CNI_CHAINING_MODE\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"cni-chaining-mode\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"CILIUM_CUSTOM_CNI_CONF\",\n                                \"valueFrom\": {\n                                    \"configMapKeyRef\": {\n                                        \"name\": \"cilium-config\",\n                                        \"key\": \"custom-cni-conf\",\n                                        \"optional\": true\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"KUBERNETES_SERVICE_HOST\",\n                                \"value\": \"api.internal.e2e-777ab8319d-1e1b5.test-cncf-aws.k8s.io\"\n                            },\n                            {\n                  
              \"name\": \"KUBERNETES_SERVICE_PORT\",\n                                \"value\": \"443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"25m\",\n                                \"memory\": \"128Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"bpf-maps\",\n                                \"mountPath\": \"/sys/fs/bpf\"\n                            },\n                            {\n                                \"name\": \"cilium-run\",\n                                \"mountPath\": \"/var/run/cilium\"\n                            },\n                            {\n                                \"name\": \"cni-path\",\n                                \"mountPath\": \"/host/opt/cni/bin\"\n                            },\n                            {\n                                \"name\": \"etc-cni-netd\",\n                                \"mountPath\": \"/host/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"clustermesh-secrets\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/lib/cilium/clustermesh\"\n                            },\n                            {\n                                \"name\": \"cilium-config-path\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/tmp/cilium/config-map\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n               
             {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-jrxhj\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 10\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n           
                 \"initialDelaySeconds\": 5,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 30,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"startupProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 9876,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\",\n                                \"httpHeaders\": [\n                                    {\n                                        \"name\": \"brief\",\n                                        \"value\": \"true\"\n                                    }\n                                ]\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 2,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 105\n                        },\n                        \"lifecycle\": {\n                            \"postStart\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-install.sh\",\n                                        \"--cni-exclusive=true\"\n                                    ]\n                                }\n                            },\n                            \"preStop\": {\n                                \"exec\": {\n                                    \"command\": [\n                                        \"/cni-uninstall.sh\"\n                                    ]\n                                }\n                            }\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n            
            \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityCo