Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-17 07:14
Elapsed: 33m14s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0917 07:14:53.005823    4088 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0917 07:14:53.007308    4088 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.23.0-alpha.2+v1.23.0-alpha.1-153-gfa29c9a6b2/linux/amd64/kops
I0917 07:14:53.806504    4088 up.go:43] Cleaning up any leaked resources from previous cluster
I0917 07:14:53.806547    4088 dumplogs.go:38] /logs/artifacts/c5798c52-1786-11ec-a91f-4a1b528dc7f1/kops toolbox dump --name e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0917 07:14:53.825598    4107 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0917 07:14:53.826143    4107 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io" not found
W0917 07:14:54.341073    4088 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0917 07:14:54.341135    4088 down.go:48] /logs/artifacts/c5798c52-1786-11ec-a91f-4a1b528dc7f1/kops delete cluster --name e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io --yes
I0917 07:14:54.358579    4117 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0917 07:14:54.358675    4117 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io" not found
I0917 07:14:54.849996    4088 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/09/17 07:14:54 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0917 07:14:54.857573    4088 http.go:37] curl https://ip.jsb.workers.dev
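The two `curl` lines above show the harness discovering its own external IP: the GCE metadata endpoint returns 404 (the job is not on GCE), so it falls back to a public IP-echo service. A minimal offline sketch of that fallback order, with fetchers injected so it can be exercised without network access (the function name and structure are illustrative, not the harness's actual code):

```python
# Hypothetical sketch of try-metadata-then-fallback IP discovery.
# Each fetcher is a zero-arg callable returning an IP string or raising
# (e.g. an HTTP 404 from the metadata service when running off-GCE).

def first_successful(fetchers):
    """Return the first fetcher's non-empty result, trying the next on failure."""
    for fetch in fetchers:
        try:
            ip = fetch()
        except Exception:
            continue  # fetcher failed (404, timeout, ...): try the next one
        if ip:
            return ip
    return None  # every source failed
```

In the log above, the first fetcher (metadata service) fails with a 404 and the second (ip.jsb.workers.dev) supplies the address used for `--admin-access`.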
I0917 07:14:54.938443    4088 up.go:144] /logs/artifacts/c5798c52-1786-11ec-a91f-4a1b528dc7f1/kops create cluster --name e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.2 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210907 --channel=alpha --networking=flannel --container-runtime=containerd --node-size=t3.large --admin-access 35.184.107.109/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-2a --master-size c5.large
I0917 07:14:54.958099    4127 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0917 07:14:54.958188    4127 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I0917 07:14:54.982069    4127 create_cluster.go:827] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0917 07:14:55.593591    4127 new_cluster.go:1052]  Cloud Provider ID = aws
... skipping 31 lines ...

I0917 07:15:20.535507    4088 up.go:181] /logs/artifacts/c5798c52-1786-11ec-a91f-4a1b528dc7f1/kops validate cluster --name e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0917 07:15:20.552368    4145 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0917 07:15:20.552474    4145 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io

W0917 07:15:21.798312    4145 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
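The message above names the mechanism: kops seeds the API DNS record with the placeholder address 203.0.113.123 (from the TEST-NET-3 documentation range), and validation cannot pass until dns-controller has replaced it with a real master IP. A minimal sketch of that readiness check; the helper name is hypothetical, not kops's actual code:

```python
# Address kops writes as a stand-in before dns-controller updates the record
# (quoted from the validation message above).
KOPS_DNS_PLACEHOLDER = "203.0.113.123"

def api_dns_ready(resolved_ips):
    """The API DNS entry is ready once it resolves to something other than the placeholder."""
    return bool(resolved_ips) and KOPS_DNS_PLACEHOLDER not in resolved_ips
```

Early in the log the name does not resolve at all ("no such host"); later it resolves to the placeholder; both states keep validation failing until dns-controller publishes the real address.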
W0917 07:15:31.844699    4145 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 353 lines ...
W0917 07:19:32.987975    4145 validate_cluster.go:232] (will retry): cluster not yet healthy
W0917 07:19:43.066744    4145 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp: lookup api.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

NODE STATUS
... skipping 9 lines ...
Node	ip-172-20-53-192.eu-west-2.compute.internal	node "ip-172-20-53-192.eu-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-5dc785954d-pz2f9		system-cluster-critical pod "coredns-5dc785954d-pz2f9" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-q52t9	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-q52t9" is pending
Pod	kube-system/ebs-csi-node-n9m9l			system-node-critical pod "ebs-csi-node-n9m9l" is pending
Pod	kube-system/kube-flannel-ds-68kmn		system-node-critical pod "kube-flannel-ds-68kmn" is pending

Validation Failed
W0917 07:19:55.766315    4145 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

... skipping 11 lines ...
Pod	kube-system/ebs-csi-node-gw6q2		system-node-critical pod "ebs-csi-node-gw6q2" is pending
Pod	kube-system/ebs-csi-node-n9m9l		system-node-critical pod "ebs-csi-node-n9m9l" is pending
Pod	kube-system/ebs-csi-node-v6z4p		system-node-critical pod "ebs-csi-node-v6z4p" is pending
Pod	kube-system/kube-flannel-ds-88p4n	system-node-critical pod "kube-flannel-ds-88p4n" is pending
Pod	kube-system/kube-flannel-ds-c47nd	system-node-critical pod "kube-flannel-ds-c47nd" is pending

Validation Failed
W0917 07:20:07.630953    4145 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

... skipping 10 lines ...
Node	ip-172-20-33-78.eu-west-2.compute.internal				node "ip-172-20-33-78.eu-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/ebs-csi-node-bk75j						system-node-critical pod "ebs-csi-node-bk75j" is pending
Pod	kube-system/ebs-csi-node-gw6q2						system-node-critical pod "ebs-csi-node-gw6q2" is pending
Pod	kube-system/ebs-csi-node-v6z4p						system-node-critical pod "ebs-csi-node-v6z4p" is pending
Pod	kube-system/kube-proxy-ip-172-20-60-186.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-60-186.eu-west-2.compute.internal" is pending

Validation Failed
W0917 07:20:19.556273    4145 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

... skipping 21 lines ...
ip-172-20-60-186.eu-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-33-78.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-33-78.eu-west-2.compute.internal" is pending

Validation Failed
W0917 07:20:43.531884    4145 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.large	4	4	eu-west-2a

... skipping 236 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 353 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
... skipping 119 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 341 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:23:16.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7663" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:17.270: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 60 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:17.292: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 118 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:24.740: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 58 lines ...
• [SLOW TEST:18.207 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:34.109: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 117 lines ...
• [SLOW TEST:18.822 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support memory backed volumes of specified size
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:18.967 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:34.831: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 475 lines ...
• [SLOW TEST:19.946 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":1,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:35.918: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 23 lines ...
Sep 17 07:23:18.235: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Sep 17 07:23:18.333: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 17 07:23:18.625: INFO: Waiting up to 5m0s for pod "security-context-ce46352f-ee89-465b-be46-b51d9490c559" in namespace "security-context-7117" to be "Succeeded or Failed"
Sep 17 07:23:18.722: INFO: Pod "security-context-ce46352f-ee89-465b-be46-b51d9490c559": Phase="Pending", Reason="", readiness=false. Elapsed: 96.828112ms
Sep 17 07:23:20.823: INFO: Pod "security-context-ce46352f-ee89-465b-be46-b51d9490c559": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198437925s
Sep 17 07:23:22.924: INFO: Pod "security-context-ce46352f-ee89-465b-be46-b51d9490c559": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298669876s
Sep 17 07:23:25.022: INFO: Pod "security-context-ce46352f-ee89-465b-be46-b51d9490c559": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396687948s
Sep 17 07:23:27.136: INFO: Pod "security-context-ce46352f-ee89-465b-be46-b51d9490c559": Phase="Pending", Reason="", readiness=false. Elapsed: 8.510841305s
Sep 17 07:23:29.233: INFO: Pod "security-context-ce46352f-ee89-465b-be46-b51d9490c559": Phase="Pending", Reason="", readiness=false. Elapsed: 10.607957324s
Sep 17 07:23:31.330: INFO: Pod "security-context-ce46352f-ee89-465b-be46-b51d9490c559": Phase="Pending", Reason="", readiness=false. Elapsed: 12.705520507s
Sep 17 07:23:33.449: INFO: Pod "security-context-ce46352f-ee89-465b-be46-b51d9490c559": Phase="Pending", Reason="", readiness=false. Elapsed: 14.824322163s
Sep 17 07:23:35.548: INFO: Pod "security-context-ce46352f-ee89-465b-be46-b51d9490c559": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.922700752s
STEP: Saw pod success
Sep 17 07:23:35.548: INFO: Pod "security-context-ce46352f-ee89-465b-be46-b51d9490c559" satisfied condition "Succeeded or Failed"
Sep 17 07:23:35.645: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod security-context-ce46352f-ee89-465b-be46-b51d9490c559 container test-container: <nil>
STEP: delete the pod
Sep 17 07:23:35.843: INFO: Waiting for pod security-context-ce46352f-ee89-465b-be46-b51d9490c559 to disappear
Sep 17 07:23:35.939: INFO: Pod security-context-ce46352f-ee89-465b-be46-b51d9490c559 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:20.239 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:36.246: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 76 lines ...
STEP: watching for Pod to be ready
Sep 17 07:23:16.673: INFO: observed Pod pod-test in namespace pods-1203 in phase Pending with labels: map[test-pod-static:true] & conditions []
Sep 17 07:23:16.673: INFO: observed Pod pod-test in namespace pods-1203 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 07:23:16 +0000 UTC  }]
Sep 17 07:23:16.673: INFO: observed Pod pod-test in namespace pods-1203 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 07:23:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-17 07:23:16 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-17 07:23:16 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 07:23:16 +0000 UTC  }]
Sep 17 07:23:31.384: INFO: Found Pod pod-test in namespace pods-1203 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 07:23:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 07:23:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 07:23:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 07:23:16 +0000 UTC  }]
STEP: patching the Pod with a new Label and updated data
Sep 17 07:23:31.580: INFO: observed event type ERROR
Sep 17 07:23:31.584: FAIL: failed to see MODIFIED event
Unexpected error:
    <*errors.errorString | 0xc00032ba00>: {
        s: "watch closed before UntilWithoutRetry timeout",
    }
    watch closed before UntilWithoutRetry timeout
occurred

... skipping 304 lines ...
• Failure [23.780 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should run through the lifecycle of Pods and PodStatus [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 17 07:23:31.584: failed to see MODIFIED event
  Unexpected error:
      <*errors.errorString | 0xc00032ba00>: {
          s: "watch closed before UntilWithoutRetry timeout",
      }
      watch closed before UntilWithoutRetry timeout
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:988
------------------------------
{"msg":"FAILED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":0,"skipped":3,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:39.500: INFO: Driver "local" does not provide raw block - skipping
... skipping 25 lines ...
Sep 17 07:23:16.838: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-31b07c9f-01fc-4a94-88e4-d09097c0221b
STEP: Creating a pod to test consume secrets
Sep 17 07:23:17.227: INFO: Waiting up to 5m0s for pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7" in namespace "secrets-9306" to be "Succeeded or Failed"
Sep 17 07:23:17.322: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7": Phase="Pending", Reason="", readiness=false. Elapsed: 95.378434ms
Sep 17 07:23:19.451: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224261165s
Sep 17 07:23:21.547: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320181192s
Sep 17 07:23:23.644: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417008547s
Sep 17 07:23:25.741: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.513637507s
Sep 17 07:23:27.836: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.609228615s
Sep 17 07:23:29.934: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.706882075s
Sep 17 07:23:32.030: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.802904868s
Sep 17 07:23:34.127: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.899782638s
Sep 17 07:23:36.223: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.9956544s
Sep 17 07:23:38.319: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.09162677s
Sep 17 07:23:40.416: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.188810893s
STEP: Saw pod success
Sep 17 07:23:40.416: INFO: Pod "pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7" satisfied condition "Succeeded or Failed"
Sep 17 07:23:40.512: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 07:23:40.742: INFO: Waiting for pod pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7 to disappear
Sep 17 07:23:40.838: INFO: Pod pod-secrets-3df8abfd-5171-489b-b657-5d4b5aa659e7 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 13 lines ...
Sep 17 07:23:34.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Sep 17 07:23:35.266: INFO: Waiting up to 5m0s for pod "client-containers-fe98af89-9e44-43d6-acf9-5e51c878e09c" in namespace "containers-9303" to be "Succeeded or Failed"
Sep 17 07:23:35.363: INFO: Pod "client-containers-fe98af89-9e44-43d6-acf9-5e51c878e09c": Phase="Pending", Reason="", readiness=false. Elapsed: 96.658368ms
Sep 17 07:23:37.460: INFO: Pod "client-containers-fe98af89-9e44-43d6-acf9-5e51c878e09c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194260483s
Sep 17 07:23:39.558: INFO: Pod "client-containers-fe98af89-9e44-43d6-acf9-5e51c878e09c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291783927s
Sep 17 07:23:41.656: INFO: Pod "client-containers-fe98af89-9e44-43d6-acf9-5e51c878e09c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.390306648s
STEP: Saw pod success
Sep 17 07:23:41.656: INFO: Pod "client-containers-fe98af89-9e44-43d6-acf9-5e51c878e09c" satisfied condition "Succeeded or Failed"
Sep 17 07:23:41.755: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod client-containers-fe98af89-9e44-43d6-acf9-5e51c878e09c container agnhost-container: <nil>
STEP: delete the pod
Sep 17 07:23:41.965: INFO: Waiting for pod client-containers-fe98af89-9e44-43d6-acf9-5e51c878e09c to disappear
Sep 17 07:23:42.062: INFO: Pod client-containers-fe98af89-9e44-43d6-acf9-5e51c878e09c no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.580 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:42.310: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 46 lines ...
Sep 17 07:23:29.656: INFO: PersistentVolumeClaim pvc-6zmpr found but phase is Pending instead of Bound.
Sep 17 07:23:31.753: INFO: PersistentVolumeClaim pvc-6zmpr found and phase=Bound (2.192708874s)
Sep 17 07:23:31.753: INFO: Waiting up to 3m0s for PersistentVolume local-62xgq to have phase Bound
Sep 17 07:23:31.849: INFO: PersistentVolume local-62xgq found and phase=Bound (96.121675ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-g4w9
STEP: Creating a pod to test subpath
Sep 17 07:23:32.139: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-g4w9" in namespace "provisioning-6509" to be "Succeeded or Failed"
Sep 17 07:23:32.237: INFO: Pod "pod-subpath-test-preprovisionedpv-g4w9": Phase="Pending", Reason="", readiness=false. Elapsed: 97.721415ms
Sep 17 07:23:34.335: INFO: Pod "pod-subpath-test-preprovisionedpv-g4w9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195239068s
Sep 17 07:23:36.431: INFO: Pod "pod-subpath-test-preprovisionedpv-g4w9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291872993s
Sep 17 07:23:38.529: INFO: Pod "pod-subpath-test-preprovisionedpv-g4w9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.389228004s
STEP: Saw pod success
Sep 17 07:23:38.529: INFO: Pod "pod-subpath-test-preprovisionedpv-g4w9" satisfied condition "Succeeded or Failed"
Sep 17 07:23:38.628: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-g4w9 container test-container-volume-preprovisionedpv-g4w9: <nil>
STEP: delete the pod
Sep 17 07:23:38.836: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-g4w9 to disappear
Sep 17 07:23:38.932: INFO: Pod pod-subpath-test-preprovisionedpv-g4w9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-g4w9
Sep 17 07:23:38.932: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-g4w9" in namespace "provisioning-6509"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:42.455: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 64 lines ...
Sep 17 07:23:29.894: INFO: PersistentVolumeClaim pvc-mxzn2 found but phase is Pending instead of Bound.
Sep 17 07:23:31.992: INFO: PersistentVolumeClaim pvc-mxzn2 found and phase=Bound (2.206712406s)
Sep 17 07:23:31.992: INFO: Waiting up to 3m0s for PersistentVolume local-lb28f to have phase Bound
Sep 17 07:23:32.093: INFO: PersistentVolume local-lb28f found and phase=Bound (100.498931ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-q2qh
STEP: Creating a pod to test subpath
Sep 17 07:23:32.395: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-q2qh" in namespace "provisioning-2693" to be "Succeeded or Failed"
Sep 17 07:23:32.492: INFO: Pod "pod-subpath-test-preprovisionedpv-q2qh": Phase="Pending", Reason="", readiness=false. Elapsed: 97.521007ms
Sep 17 07:23:34.590: INFO: Pod "pod-subpath-test-preprovisionedpv-q2qh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195374627s
Sep 17 07:23:36.689: INFO: Pod "pod-subpath-test-preprovisionedpv-q2qh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294044906s
Sep 17 07:23:38.788: INFO: Pod "pod-subpath-test-preprovisionedpv-q2qh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392896958s
Sep 17 07:23:40.886: INFO: Pod "pod-subpath-test-preprovisionedpv-q2qh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.491286962s
STEP: Saw pod success
Sep 17 07:23:40.886: INFO: Pod "pod-subpath-test-preprovisionedpv-q2qh" satisfied condition "Succeeded or Failed"
Sep 17 07:23:40.992: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-q2qh container test-container-volume-preprovisionedpv-q2qh: <nil>
STEP: delete the pod
Sep 17 07:23:41.206: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-q2qh to disappear
Sep 17 07:23:41.308: INFO: Pod pod-subpath-test-preprovisionedpv-q2qh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-q2qh
Sep 17 07:23:41.308: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-q2qh" in namespace "provisioning-2693"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:42.783: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 103 lines ...
• [SLOW TEST:18.285 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should validate Deployment Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:43.067: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-612afa94-8ee4-4e25-8a1c-bf786d05ce79
STEP: Creating a pod to test consume configMaps
Sep 17 07:23:36.972: INFO: Waiting up to 5m0s for pod "pod-configmaps-48fd34fa-31fe-4228-93a3-05acb4a37002" in namespace "configmap-1794" to be "Succeeded or Failed"
Sep 17 07:23:37.072: INFO: Pod "pod-configmaps-48fd34fa-31fe-4228-93a3-05acb4a37002": Phase="Pending", Reason="", readiness=false. Elapsed: 99.674647ms
Sep 17 07:23:39.169: INFO: Pod "pod-configmaps-48fd34fa-31fe-4228-93a3-05acb4a37002": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19689398s
Sep 17 07:23:41.274: INFO: Pod "pod-configmaps-48fd34fa-31fe-4228-93a3-05acb4a37002": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302525922s
Sep 17 07:23:43.373: INFO: Pod "pod-configmaps-48fd34fa-31fe-4228-93a3-05acb4a37002": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.401328168s
STEP: Saw pod success
Sep 17 07:23:43.373: INFO: Pod "pod-configmaps-48fd34fa-31fe-4228-93a3-05acb4a37002" satisfied condition "Succeeded or Failed"
Sep 17 07:23:43.470: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-configmaps-48fd34fa-31fe-4228-93a3-05acb4a37002 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 07:23:43.688: INFO: Waiting for pod pod-configmaps-48fd34fa-31fe-4228-93a3-05acb4a37002 to disappear
Sep 17 07:23:43.791: INFO: Pod pod-configmaps-48fd34fa-31fe-4228-93a3-05acb4a37002 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.720 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":43,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":2,"skipped":23,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:44.104: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
Sep 17 07:23:32.618: INFO: PersistentVolumeClaim pvc-qtm8n found and phase=Bound (98.693717ms)
Sep 17 07:23:32.618: INFO: Waiting up to 3m0s for PersistentVolume nfs-qg62j to have phase Bound
Sep 17 07:23:32.720: INFO: PersistentVolume nfs-qg62j found and phase=Bound (102.211274ms)
STEP: Checking pod has write access to PersistentVolume
Sep 17 07:23:32.924: INFO: Creating nfs test pod
Sep 17 07:23:33.023: INFO: Pod should terminate with exitcode 0 (success)
Sep 17 07:23:33.023: INFO: Waiting up to 5m0s for pod "pvc-tester-4b5kb" in namespace "pv-6016" to be "Succeeded or Failed"
Sep 17 07:23:33.120: INFO: Pod "pvc-tester-4b5kb": Phase="Pending", Reason="", readiness=false. Elapsed: 97.637705ms
Sep 17 07:23:35.219: INFO: Pod "pvc-tester-4b5kb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196813827s
Sep 17 07:23:37.318: INFO: Pod "pvc-tester-4b5kb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295224166s
Sep 17 07:23:39.417: INFO: Pod "pvc-tester-4b5kb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.394354966s
STEP: Saw pod success
Sep 17 07:23:39.417: INFO: Pod "pvc-tester-4b5kb" satisfied condition "Succeeded or Failed"
Sep 17 07:23:39.417: INFO: Pod pvc-tester-4b5kb succeeded 
Sep 17 07:23:39.417: INFO: Deleting pod "pvc-tester-4b5kb" in namespace "pv-6016"
Sep 17 07:23:39.530: INFO: Wait up to 5m0s for pod "pvc-tester-4b5kb" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Sep 17 07:23:39.628: INFO: Deleting PVC pvc-qtm8n to trigger reclamation of PV nfs-qg62j
Sep 17 07:23:39.628: INFO: Deleting PersistentVolumeClaim "pvc-qtm8n"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:23:41.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-ed8eafa5-4c2b-49f8-8444-c7d0f9e212d8
STEP: Creating a pod to test consume configMaps
Sep 17 07:23:41.852: INFO: Waiting up to 5m0s for pod "pod-configmaps-090507a1-059d-4a34-9c63-4c7c7f5e7704" in namespace "configmap-7481" to be "Succeeded or Failed"
Sep 17 07:23:41.949: INFO: Pod "pod-configmaps-090507a1-059d-4a34-9c63-4c7c7f5e7704": Phase="Pending", Reason="", readiness=false. Elapsed: 97.376599ms
Sep 17 07:23:44.047: INFO: Pod "pod-configmaps-090507a1-059d-4a34-9c63-4c7c7f5e7704": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194613028s
Sep 17 07:23:46.143: INFO: Pod "pod-configmaps-090507a1-059d-4a34-9c63-4c7c7f5e7704": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.291266197s
STEP: Saw pod success
Sep 17 07:23:46.143: INFO: Pod "pod-configmaps-090507a1-059d-4a34-9c63-4c7c7f5e7704" satisfied condition "Succeeded or Failed"
Sep 17 07:23:46.240: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-configmaps-090507a1-059d-4a34-9c63-4c7c7f5e7704 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 07:23:46.442: INFO: Waiting for pod pod-configmaps-090507a1-059d-4a34-9c63-4c7c7f5e7704 to disappear
Sep 17 07:23:46.539: INFO: Pod pod-configmaps-090507a1-059d-4a34-9c63-4c7c7f5e7704 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.584 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:46.765: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 07:23:43.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af47407c-d569-464b-89dd-85990a7df63b" in namespace "downward-api-1631" to be "Succeeded or Failed"
Sep 17 07:23:43.528: INFO: Pod "downwardapi-volume-af47407c-d569-464b-89dd-85990a7df63b": Phase="Pending", Reason="", readiness=false. Elapsed: 99.425355ms
Sep 17 07:23:45.633: INFO: Pod "downwardapi-volume-af47407c-d569-464b-89dd-85990a7df63b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204750782s
Sep 17 07:23:47.731: INFO: Pod "downwardapi-volume-af47407c-d569-464b-89dd-85990a7df63b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.30238194s
STEP: Saw pod success
Sep 17 07:23:47.731: INFO: Pod "downwardapi-volume-af47407c-d569-464b-89dd-85990a7df63b" satisfied condition "Succeeded or Failed"
Sep 17 07:23:47.829: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod downwardapi-volume-af47407c-d569-464b-89dd-85990a7df63b container client-container: <nil>
STEP: delete the pod
Sep 17 07:23:48.031: INFO: Waiting for pod downwardapi-volume-af47407c-d569-464b-89dd-85990a7df63b to disappear
Sep 17 07:23:48.128: INFO: Pod downwardapi-volume-af47407c-d569-464b-89dd-85990a7df63b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.498 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:50.593: INFO: Only supported for providers [azure] (not aws)
... skipping 48 lines ...
• [SLOW TEST:10.568 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 24 lines ...
Sep 17 07:23:45.246: INFO: PersistentVolumeClaim pvc-hhql5 found but phase is Pending instead of Bound.
Sep 17 07:23:47.344: INFO: PersistentVolumeClaim pvc-hhql5 found and phase=Bound (14.804846663s)
Sep 17 07:23:47.344: INFO: Waiting up to 3m0s for PersistentVolume local-6mfq6 to have phase Bound
Sep 17 07:23:47.445: INFO: PersistentVolume local-6mfq6 found and phase=Bound (100.478563ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-8vnm
STEP: Creating a pod to test exec-volume-test
Sep 17 07:23:47.741: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-8vnm" in namespace "volume-7645" to be "Succeeded or Failed"
Sep 17 07:23:47.841: INFO: Pod "exec-volume-test-preprovisionedpv-8vnm": Phase="Pending", Reason="", readiness=false. Elapsed: 99.733135ms
Sep 17 07:23:49.939: INFO: Pod "exec-volume-test-preprovisionedpv-8vnm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197653179s
Sep 17 07:23:52.036: INFO: Pod "exec-volume-test-preprovisionedpv-8vnm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29486347s
Sep 17 07:23:54.133: INFO: Pod "exec-volume-test-preprovisionedpv-8vnm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.392233573s
STEP: Saw pod success
Sep 17 07:23:54.133: INFO: Pod "exec-volume-test-preprovisionedpv-8vnm" satisfied condition "Succeeded or Failed"
Sep 17 07:23:54.230: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-8vnm container exec-container-preprovisionedpv-8vnm: <nil>
STEP: delete the pod
Sep 17 07:23:54.429: INFO: Waiting for pod exec-volume-test-preprovisionedpv-8vnm to disappear
Sep 17 07:23:54.526: INFO: Pod exec-volume-test-preprovisionedpv-8vnm no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-8vnm
Sep 17 07:23:54.526: INFO: Deleting pod "exec-volume-test-preprovisionedpv-8vnm" in namespace "volume-7645"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [sig-storage] Mounted volume expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:23:18.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename mounted-volume-expand
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 42 lines ...
• [SLOW TEST:37.764 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:55.971: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
Sep 17 07:23:43.895: INFO: PersistentVolumeClaim pvc-54m4k found but phase is Pending instead of Bound.
Sep 17 07:23:45.992: INFO: PersistentVolumeClaim pvc-54m4k found and phase=Bound (12.705016932s)
Sep 17 07:23:45.992: INFO: Waiting up to 3m0s for PersistentVolume local-22hsv to have phase Bound
Sep 17 07:23:46.095: INFO: PersistentVolume local-22hsv found and phase=Bound (103.107187ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-78rb
STEP: Creating a pod to test exec-volume-test
Sep 17 07:23:46.392: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-78rb" in namespace "volume-305" to be "Succeeded or Failed"
Sep 17 07:23:46.495: INFO: Pod "exec-volume-test-preprovisionedpv-78rb": Phase="Pending", Reason="", readiness=false. Elapsed: 102.882912ms
Sep 17 07:23:48.592: INFO: Pod "exec-volume-test-preprovisionedpv-78rb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200576712s
Sep 17 07:23:50.689: INFO: Pod "exec-volume-test-preprovisionedpv-78rb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297513985s
Sep 17 07:23:52.805: INFO: Pod "exec-volume-test-preprovisionedpv-78rb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.412860457s
STEP: Saw pod success
Sep 17 07:23:52.805: INFO: Pod "exec-volume-test-preprovisionedpv-78rb" satisfied condition "Succeeded or Failed"
Sep 17 07:23:52.901: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-78rb container exec-container-preprovisionedpv-78rb: <nil>
STEP: delete the pod
Sep 17 07:23:53.099: INFO: Waiting for pod exec-volume-test-preprovisionedpv-78rb to disappear
Sep 17 07:23:53.195: INFO: Pod exec-volume-test-preprovisionedpv-78rb no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-78rb
Sep 17 07:23:53.195: INFO: Deleting pod "exec-volume-test-preprovisionedpv-78rb" in namespace "volume-305"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:56.100: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 25 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:23:56.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:56.411: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 59 lines ...
• [SLOW TEST:12.376 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":3,"skipped":29,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:56.520: INFO: Only supported for providers [gce gke] (not aws)
... skipping 94 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 17 07:23:51.207: INFO: Waiting up to 5m0s for pod "pod-b123ade2-7b40-427e-9d1b-7ac96e5037b1" in namespace "emptydir-3018" to be "Succeeded or Failed"
Sep 17 07:23:51.304: INFO: Pod "pod-b123ade2-7b40-427e-9d1b-7ac96e5037b1": Phase="Pending", Reason="", readiness=false. Elapsed: 96.970874ms
Sep 17 07:23:53.403: INFO: Pod "pod-b123ade2-7b40-427e-9d1b-7ac96e5037b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195544623s
Sep 17 07:23:55.501: INFO: Pod "pod-b123ade2-7b40-427e-9d1b-7ac96e5037b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293209804s
Sep 17 07:23:57.599: INFO: Pod "pod-b123ade2-7b40-427e-9d1b-7ac96e5037b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.39137948s
STEP: Saw pod success
Sep 17 07:23:57.599: INFO: Pod "pod-b123ade2-7b40-427e-9d1b-7ac96e5037b1" satisfied condition "Succeeded or Failed"
Sep 17 07:23:57.696: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-b123ade2-7b40-427e-9d1b-7ac96e5037b1 container test-container: <nil>
STEP: delete the pod
Sep 17 07:23:57.895: INFO: Waiting for pod pod-b123ade2-7b40-427e-9d1b-7ac96e5037b1 to disappear
Sep 17 07:23:57.992: INFO: Pod pod-b123ade2-7b40-427e-9d1b-7ac96e5037b1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    files with FSGroup ownership should support (root,0644,tmpfs)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":2,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:23:58.215: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 56 lines ...
Sep 17 07:23:16.518: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-9775g88wq
STEP: creating a claim
Sep 17 07:23:16.616: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-8fkf
STEP: Creating a pod to test exec-volume-test
Sep 17 07:23:16.913: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-8fkf" in namespace "volume-9775" to be "Succeeded or Failed"
Sep 17 07:23:17.010: INFO: Pod "exec-volume-test-dynamicpv-8fkf": Phase="Pending", Reason="", readiness=false. Elapsed: 96.686298ms
Sep 17 07:23:19.108: INFO: Pod "exec-volume-test-dynamicpv-8fkf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194763423s
Sep 17 07:23:21.206: INFO: Pod "exec-volume-test-dynamicpv-8fkf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29306141s
Sep 17 07:23:23.306: INFO: Pod "exec-volume-test-dynamicpv-8fkf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392739906s
Sep 17 07:23:25.403: INFO: Pod "exec-volume-test-dynamicpv-8fkf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.489964136s
Sep 17 07:23:27.501: INFO: Pod "exec-volume-test-dynamicpv-8fkf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.588218474s
... skipping 2 lines ...
Sep 17 07:23:33.802: INFO: Pod "exec-volume-test-dynamicpv-8fkf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.888589985s
Sep 17 07:23:35.900: INFO: Pod "exec-volume-test-dynamicpv-8fkf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.987038091s
Sep 17 07:23:37.998: INFO: Pod "exec-volume-test-dynamicpv-8fkf": Phase="Pending", Reason="", readiness=false. Elapsed: 21.085062436s
Sep 17 07:23:40.095: INFO: Pod "exec-volume-test-dynamicpv-8fkf": Phase="Pending", Reason="", readiness=false. Elapsed: 23.182092647s
Sep 17 07:23:42.194: INFO: Pod "exec-volume-test-dynamicpv-8fkf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.280709709s
STEP: Saw pod success
Sep 17 07:23:42.194: INFO: Pod "exec-volume-test-dynamicpv-8fkf" satisfied condition "Succeeded or Failed"
Sep 17 07:23:42.292: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod exec-volume-test-dynamicpv-8fkf container exec-container-dynamicpv-8fkf: <nil>
STEP: delete the pod
Sep 17 07:23:42.502: INFO: Waiting for pod exec-volume-test-dynamicpv-8fkf to disappear
Sep 17 07:23:42.602: INFO: Pod exec-volume-test-dynamicpv-8fkf no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-8fkf
Sep 17 07:23:42.602: INFO: Deleting pod "exec-volume-test-dynamicpv-8fkf" in namespace "volume-9775"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:03.910: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 172 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":45,"failed":0}
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:24:04.534: INFO: >>> kubeConfig: /root/.kube/config
[It] watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:46
Sep 17 07:24:04.535: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:24:04.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":4,"skipped":45,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:04.856: INFO: Only supported for providers [gce gke] (not aws)
... skipping 66 lines ...
• [SLOW TEST:9.061 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":4,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:05.509: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 112 lines ...
Sep 17 07:23:43.781: INFO: PersistentVolumeClaim pvc-nwbt4 found but phase is Pending instead of Bound.
Sep 17 07:23:45.878: INFO: PersistentVolumeClaim pvc-nwbt4 found and phase=Bound (12.689912756s)
Sep 17 07:23:45.878: INFO: Waiting up to 3m0s for PersistentVolume local-ksf4t to have phase Bound
Sep 17 07:23:45.975: INFO: PersistentVolume local-ksf4t found and phase=Bound (97.441825ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8mt7
STEP: Creating a pod to test subpath
Sep 17 07:23:46.272: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8mt7" in namespace "provisioning-7542" to be "Succeeded or Failed"
Sep 17 07:23:46.370: INFO: Pod "pod-subpath-test-preprovisionedpv-8mt7": Phase="Pending", Reason="", readiness=false. Elapsed: 97.414578ms
Sep 17 07:23:48.468: INFO: Pod "pod-subpath-test-preprovisionedpv-8mt7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195683922s
Sep 17 07:23:50.565: INFO: Pod "pod-subpath-test-preprovisionedpv-8mt7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29245642s
Sep 17 07:23:52.660: INFO: Pod "pod-subpath-test-preprovisionedpv-8mt7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.388216344s
Sep 17 07:23:54.758: INFO: Pod "pod-subpath-test-preprovisionedpv-8mt7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.485336533s
Sep 17 07:23:56.854: INFO: Pod "pod-subpath-test-preprovisionedpv-8mt7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.581322255s
Sep 17 07:23:58.952: INFO: Pod "pod-subpath-test-preprovisionedpv-8mt7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.679421026s
Sep 17 07:24:01.049: INFO: Pod "pod-subpath-test-preprovisionedpv-8mt7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.7769056s
Sep 17 07:24:03.145: INFO: Pod "pod-subpath-test-preprovisionedpv-8mt7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.873046848s
Sep 17 07:24:05.244: INFO: Pod "pod-subpath-test-preprovisionedpv-8mt7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.971762942s
STEP: Saw pod success
Sep 17 07:24:05.244: INFO: Pod "pod-subpath-test-preprovisionedpv-8mt7" satisfied condition "Succeeded or Failed"
Sep 17 07:24:05.340: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-8mt7 container test-container-volume-preprovisionedpv-8mt7: <nil>
STEP: delete the pod
Sep 17 07:24:05.550: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8mt7 to disappear
Sep 17 07:24:05.646: INFO: Pod pod-subpath-test-preprovisionedpv-8mt7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8mt7
Sep 17 07:24:05.646: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8mt7" in namespace "provisioning-7542"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 24 lines ...
Sep 17 07:23:44.776: INFO: PersistentVolumeClaim pvc-qvt9f found but phase is Pending instead of Bound.
Sep 17 07:23:46.873: INFO: PersistentVolumeClaim pvc-qvt9f found and phase=Bound (14.786829267s)
Sep 17 07:23:46.873: INFO: Waiting up to 3m0s for PersistentVolume local-rj4zr to have phase Bound
Sep 17 07:23:46.969: INFO: PersistentVolume local-rj4zr found and phase=Bound (96.056054ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rj7r
STEP: Creating a pod to test subpath
Sep 17 07:23:47.261: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rj7r" in namespace "provisioning-3432" to be "Succeeded or Failed"
Sep 17 07:23:47.357: INFO: Pod "pod-subpath-test-preprovisionedpv-rj7r": Phase="Pending", Reason="", readiness=false. Elapsed: 96.561501ms
Sep 17 07:23:49.456: INFO: Pod "pod-subpath-test-preprovisionedpv-rj7r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194726119s
Sep 17 07:23:51.552: INFO: Pod "pod-subpath-test-preprovisionedpv-rj7r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291708503s
Sep 17 07:23:53.649: INFO: Pod "pod-subpath-test-preprovisionedpv-rj7r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.388680129s
Sep 17 07:23:55.747: INFO: Pod "pod-subpath-test-preprovisionedpv-rj7r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.486209674s
Sep 17 07:23:57.844: INFO: Pod "pod-subpath-test-preprovisionedpv-rj7r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.582827783s
Sep 17 07:23:59.940: INFO: Pod "pod-subpath-test-preprovisionedpv-rj7r": Phase="Pending", Reason="", readiness=false. Elapsed: 12.679133242s
Sep 17 07:24:02.038: INFO: Pod "pod-subpath-test-preprovisionedpv-rj7r": Phase="Pending", Reason="", readiness=false. Elapsed: 14.77673759s
Sep 17 07:24:04.135: INFO: Pod "pod-subpath-test-preprovisionedpv-rj7r": Phase="Pending", Reason="", readiness=false. Elapsed: 16.874395713s
Sep 17 07:24:06.232: INFO: Pod "pod-subpath-test-preprovisionedpv-rj7r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.970958055s
STEP: Saw pod success
Sep 17 07:24:06.232: INFO: Pod "pod-subpath-test-preprovisionedpv-rj7r" satisfied condition "Succeeded or Failed"
Sep 17 07:24:06.328: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-rj7r container test-container-subpath-preprovisionedpv-rj7r: <nil>
STEP: delete the pod
Sep 17 07:24:06.528: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rj7r to disappear
Sep 17 07:24:06.627: INFO: Pod pod-subpath-test-preprovisionedpv-rj7r no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rj7r
Sep 17 07:24:06.628: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rj7r" in namespace "provisioning-3432"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:08.038: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 68 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:09.081: INFO: Only supported for providers [vsphere] (not aws)
... skipping 22 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Sep 17 07:24:05.463: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-d9b36f86-3b48-4501-8fd9-f06c2550aa9a" in namespace "security-context-test-9580" to be "Succeeded or Failed"
Sep 17 07:24:05.560: INFO: Pod "busybox-privileged-true-d9b36f86-3b48-4501-8fd9-f06c2550aa9a": Phase="Pending", Reason="", readiness=false. Elapsed: 96.916453ms
Sep 17 07:24:07.658: INFO: Pod "busybox-privileged-true-d9b36f86-3b48-4501-8fd9-f06c2550aa9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194987174s
Sep 17 07:24:09.767: INFO: Pod "busybox-privileged-true-d9b36f86-3b48-4501-8fd9-f06c2550aa9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.303595347s
Sep 17 07:24:09.767: INFO: Pod "busybox-privileged-true-d9b36f86-3b48-4501-8fd9-f06c2550aa9a" satisfied condition "Succeeded or Failed"
Sep 17 07:24:09.867: INFO: Got logs for pod "busybox-privileged-true-d9b36f86-3b48-4501-8fd9-f06c2550aa9a": ""
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:24:09.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9580" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":5,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:10.083: INFO: Only supported for providers [openstack] (not aws)
... skipping 87 lines ...
• [SLOW TEST:24.748 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":3,"skipped":19,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:11.591: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 64 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-499ef7e3-67a9-45f0-a892-9068a98ddd80
STEP: Creating a pod to test consume secrets
Sep 17 07:24:07.721: INFO: Waiting up to 5m0s for pod "pod-secrets-e7ca813d-f952-4edf-bd37-db7924252911" in namespace "secrets-4350" to be "Succeeded or Failed"
Sep 17 07:24:07.817: INFO: Pod "pod-secrets-e7ca813d-f952-4edf-bd37-db7924252911": Phase="Pending", Reason="", readiness=false. Elapsed: 95.795488ms
Sep 17 07:24:09.914: INFO: Pod "pod-secrets-e7ca813d-f952-4edf-bd37-db7924252911": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192972767s
Sep 17 07:24:12.012: INFO: Pod "pod-secrets-e7ca813d-f952-4edf-bd37-db7924252911": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.290708296s
STEP: Saw pod success
Sep 17 07:24:12.012: INFO: Pod "pod-secrets-e7ca813d-f952-4edf-bd37-db7924252911" satisfied condition "Succeeded or Failed"
Sep 17 07:24:12.110: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-secrets-e7ca813d-f952-4edf-bd37-db7924252911 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 07:24:12.308: INFO: Waiting for pod pod-secrets-e7ca813d-f952-4edf-bd37-db7924252911 to disappear
Sep 17 07:24:12.403: INFO: Pod pod-secrets-e7ca813d-f952-4edf-bd37-db7924252911 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.548 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:12.615: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:13.742: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 21 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 07:23:56.601: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-eccef563-64e4-4694-b34d-3984edc05eec" in namespace "security-context-test-8604" to be "Succeeded or Failed"
Sep 17 07:23:56.697: INFO: Pod "busybox-readonly-false-eccef563-64e4-4694-b34d-3984edc05eec": Phase="Pending", Reason="", readiness=false. Elapsed: 95.952755ms
Sep 17 07:23:58.794: INFO: Pod "busybox-readonly-false-eccef563-64e4-4694-b34d-3984edc05eec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193051523s
Sep 17 07:24:00.893: INFO: Pod "busybox-readonly-false-eccef563-64e4-4694-b34d-3984edc05eec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292586848s
Sep 17 07:24:02.993: INFO: Pod "busybox-readonly-false-eccef563-64e4-4694-b34d-3984edc05eec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392199162s
Sep 17 07:24:05.090: INFO: Pod "busybox-readonly-false-eccef563-64e4-4694-b34d-3984edc05eec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.489559232s
Sep 17 07:24:07.188: INFO: Pod "busybox-readonly-false-eccef563-64e4-4694-b34d-3984edc05eec": Phase="Pending", Reason="", readiness=false. Elapsed: 10.58698493s
Sep 17 07:24:09.285: INFO: Pod "busybox-readonly-false-eccef563-64e4-4694-b34d-3984edc05eec": Phase="Pending", Reason="", readiness=false. Elapsed: 12.683879341s
Sep 17 07:24:11.382: INFO: Pod "busybox-readonly-false-eccef563-64e4-4694-b34d-3984edc05eec": Phase="Pending", Reason="", readiness=false. Elapsed: 14.780769693s
Sep 17 07:24:13.478: INFO: Pod "busybox-readonly-false-eccef563-64e4-4694-b34d-3984edc05eec": Phase="Pending", Reason="", readiness=false. Elapsed: 16.877328812s
Sep 17 07:24:15.576: INFO: Pod "busybox-readonly-false-eccef563-64e4-4694-b34d-3984edc05eec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.974701149s
Sep 17 07:24:15.576: INFO: Pod "busybox-readonly-false-eccef563-64e4-4694-b34d-3984edc05eec" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:24:15.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8604" for this suite.


... skipping 145 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":2,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:17.935: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 40 lines ...
Sep 17 07:23:45.279: INFO: PersistentVolumeClaim pvc-gr8qp found but phase is Pending instead of Bound.
Sep 17 07:23:47.377: INFO: PersistentVolumeClaim pvc-gr8qp found and phase=Bound (6.401755581s)
Sep 17 07:23:47.378: INFO: Waiting up to 3m0s for PersistentVolume aws-lxh75 to have phase Bound
Sep 17 07:23:47.475: INFO: PersistentVolume aws-lxh75 found and phase=Bound (97.390346ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-ph6p
STEP: Creating a pod to test exec-volume-test
Sep 17 07:23:47.767: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-ph6p" in namespace "volume-5549" to be "Succeeded or Failed"
Sep 17 07:23:47.867: INFO: Pod "exec-volume-test-preprovisionedpv-ph6p": Phase="Pending", Reason="", readiness=false. Elapsed: 99.666794ms
Sep 17 07:23:49.965: INFO: Pod "exec-volume-test-preprovisionedpv-ph6p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1974794s
Sep 17 07:23:52.062: INFO: Pod "exec-volume-test-preprovisionedpv-ph6p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294485356s
Sep 17 07:23:54.159: INFO: Pod "exec-volume-test-preprovisionedpv-ph6p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391765804s
Sep 17 07:23:56.261: INFO: Pod "exec-volume-test-preprovisionedpv-ph6p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.494192201s
Sep 17 07:23:58.358: INFO: Pod "exec-volume-test-preprovisionedpv-ph6p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.591298057s
Sep 17 07:24:00.460: INFO: Pod "exec-volume-test-preprovisionedpv-ph6p": Phase="Pending", Reason="", readiness=false. Elapsed: 12.693272195s
Sep 17 07:24:02.557: INFO: Pod "exec-volume-test-preprovisionedpv-ph6p": Phase="Pending", Reason="", readiness=false. Elapsed: 14.789853476s
Sep 17 07:24:04.655: INFO: Pod "exec-volume-test-preprovisionedpv-ph6p": Phase="Pending", Reason="", readiness=false. Elapsed: 16.887620378s
Sep 17 07:24:06.751: INFO: Pod "exec-volume-test-preprovisionedpv-ph6p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.984403589s
STEP: Saw pod success
Sep 17 07:24:06.752: INFO: Pod "exec-volume-test-preprovisionedpv-ph6p" satisfied condition "Succeeded or Failed"
Sep 17 07:24:06.848: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-ph6p container exec-container-preprovisionedpv-ph6p: <nil>
STEP: delete the pod
Sep 17 07:24:07.047: INFO: Waiting for pod exec-volume-test-preprovisionedpv-ph6p to disappear
Sep 17 07:24:07.143: INFO: Pod exec-volume-test-preprovisionedpv-ph6p no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-ph6p
Sep 17 07:24:07.143: INFO: Deleting pod "exec-volume-test-preprovisionedpv-ph6p" in namespace "volume-5549"
STEP: Deleting pv and pvc
Sep 17 07:24:07.239: INFO: Deleting PersistentVolumeClaim "pvc-gr8qp"
Sep 17 07:24:07.336: INFO: Deleting PersistentVolume "aws-lxh75"
Sep 17 07:24:07.705: INFO: Couldn't delete PD "aws://eu-west-2a/vol-01d144cab17eba6e4", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01d144cab17eba6e4 is currently attached to i-0240c6ba1a0682c2b
	status code: 400, request id: cb3b4ebd-2f6a-4000-b66f-da1c65387ee3
Sep 17 07:24:13.296: INFO: Couldn't delete PD "aws://eu-west-2a/vol-01d144cab17eba6e4", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01d144cab17eba6e4 is currently attached to i-0240c6ba1a0682c2b
	status code: 400, request id: c8c99957-4acd-47df-bc47-186dd3666ab5
Sep 17 07:24:18.892: INFO: Successfully deleted PD "aws://eu-west-2a/vol-01d144cab17eba6e4".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:24:18.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5549" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":5,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:19.103: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 104 lines ...
STEP: Deleting pod hostexec-ip-172-20-60-186.eu-west-2.compute.internal-8666g in namespace volumemode-3515
Sep 17 07:24:07.297: INFO: Deleting pod "pod-c0a0653b-02a1-4671-b510-6cc89bf99711" in namespace "volumemode-3515"
Sep 17 07:24:07.398: INFO: Wait up to 5m0s for pod "pod-c0a0653b-02a1-4671-b510-6cc89bf99711" to be fully deleted
STEP: Deleting pv and pvc
Sep 17 07:24:09.595: INFO: Deleting PersistentVolumeClaim "pvc-cwr9r"
Sep 17 07:24:09.695: INFO: Deleting PersistentVolume "aws-5s476"
Sep 17 07:24:10.030: INFO: Couldn't delete PD "aws://eu-west-2a/vol-02af9e4e926bf670e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-02af9e4e926bf670e is currently attached to i-0aa984ac7bb70ba77
	status code: 400, request id: e295af2c-62b5-487d-be20-7e69b08208b3
Sep 17 07:24:15.602: INFO: Couldn't delete PD "aws://eu-west-2a/vol-02af9e4e926bf670e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-02af9e4e926bf670e is currently attached to i-0aa984ac7bb70ba77
	status code: 400, request id: 488be3ae-762b-4511-ae90-6b7dde037c52
Sep 17 07:24:21.133: INFO: Successfully deleted PD "aws://eu-west-2a/vol-02af9e4e926bf670e".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:24:21.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-3515" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:23:44.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
• [SLOW TEST:36.646 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:206
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":2,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:21.414: INFO: Only supported for providers [gce gke] (not aws)
... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":6,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:21.602: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 67 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:391
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":3,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:21.948: INFO: Only supported for providers [gce gke] (not aws)
... skipping 38 lines ...
STEP: Destroying namespace "services-1029" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":4,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:23.223: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 07:24:19.800: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59727eb9-1f91-4010-9964-2778b21915e9" in namespace "projected-2502" to be "Succeeded or Failed"
Sep 17 07:24:19.896: INFO: Pod "downwardapi-volume-59727eb9-1f91-4010-9964-2778b21915e9": Phase="Pending", Reason="", readiness=false. Elapsed: 96.144982ms
Sep 17 07:24:21.992: INFO: Pod "downwardapi-volume-59727eb9-1f91-4010-9964-2778b21915e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192638276s
Sep 17 07:24:24.090: INFO: Pod "downwardapi-volume-59727eb9-1f91-4010-9964-2778b21915e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.289957118s
STEP: Saw pod success
Sep 17 07:24:24.090: INFO: Pod "downwardapi-volume-59727eb9-1f91-4010-9964-2778b21915e9" satisfied condition "Succeeded or Failed"
Sep 17 07:24:24.199: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod downwardapi-volume-59727eb9-1f91-4010-9964-2778b21915e9 container client-container: <nil>
STEP: delete the pod
Sep 17 07:24:24.406: INFO: Waiting for pod downwardapi-volume-59727eb9-1f91-4010-9964-2778b21915e9 to disappear
Sep 17 07:24:24.505: INFO: Pod downwardapi-volume-59727eb9-1f91-4010-9964-2778b21915e9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.483 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":29,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:24.716: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 367 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 90 lines ...
• [SLOW TEST:70.886 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1232
------------------------------
{"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:26.756: INFO: Only supported for providers [vsphere] (not aws)
... skipping 23 lines ...
Sep 17 07:24:21.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 17 07:24:22.036: INFO: Waiting up to 5m0s for pod "security-context-1d61d317-94f9-4c08-9ddd-d34c75700fed" in namespace "security-context-6802" to be "Succeeded or Failed"
Sep 17 07:24:22.134: INFO: Pod "security-context-1d61d317-94f9-4c08-9ddd-d34c75700fed": Phase="Pending", Reason="", readiness=false. Elapsed: 97.719984ms
Sep 17 07:24:24.235: INFO: Pod "security-context-1d61d317-94f9-4c08-9ddd-d34c75700fed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199363001s
Sep 17 07:24:26.335: INFO: Pod "security-context-1d61d317-94f9-4c08-9ddd-d34c75700fed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.298673422s
STEP: Saw pod success
Sep 17 07:24:26.335: INFO: Pod "security-context-1d61d317-94f9-4c08-9ddd-d34c75700fed" satisfied condition "Succeeded or Failed"
Sep 17 07:24:26.433: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod security-context-1d61d317-94f9-4c08-9ddd-d34c75700fed container test-container: <nil>
STEP: delete the pod
Sep 17 07:24:26.645: INFO: Waiting for pod security-context-1d61d317-94f9-4c08-9ddd-d34c75700fed to disappear
Sep 17 07:24:26.747: INFO: Pod security-context-1d61d317-94f9-4c08-9ddd-d34c75700fed no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.505 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":3,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:26.962: INFO: Only supported for providers [openstack] (not aws)
... skipping 14 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":31,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:24:21.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:7.983 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:29.332: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 36 lines ...
• [SLOW TEST:47.166 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:24:31.485: INFO: >>> kubeConfig: /root/.kube/config
... skipping 76 lines ...
Sep 17 07:23:39.241: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-8p5qk] to have phase Bound
Sep 17 07:23:39.337: INFO: PersistentVolumeClaim pvc-8p5qk found and phase=Bound (96.274869ms)
STEP: Deleting the previously created pod
Sep 17 07:24:03.825: INFO: Deleting pod "pvc-volume-tester-g8qml" in namespace "csi-mock-volumes-7861"
Sep 17 07:24:03.922: INFO: Wait up to 5m0s for pod "pvc-volume-tester-g8qml" to be fully deleted
STEP: Checking CSI driver logs
Sep 17 07:24:10.217: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/5f03a03a-a5c2-4034-857d-fef0d993333d/volumes/kubernetes.io~csi/pvc-100e8d00-9347-427f-b4bf-892e12b99eb5/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-g8qml
Sep 17 07:24:10.217: INFO: Deleting pod "pvc-volume-tester-g8qml" in namespace "csi-mock-volumes-7861"
STEP: Deleting claim pvc-8p5qk
Sep 17 07:24:10.508: INFO: Waiting up to 2m0s for PersistentVolume pvc-100e8d00-9347-427f-b4bf-892e12b99eb5 to get deleted
Sep 17 07:24:10.605: INFO: PersistentVolume pvc-100e8d00-9347-427f-b4bf-892e12b99eb5 was removed
STEP: Deleting storageclass csi-mock-volumes-7861-sc8pc8j
... skipping 43 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1497
    token should not be plumbed down when CSIDriver is not deployed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":1,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:32.257: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 54 lines ...
• [SLOW TEST:8.953 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":3,"skipped":40,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:33.717: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:24:34.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2720" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":2,"skipped":18,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:24:15.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
Sep 17 07:24:19.447: INFO: PersistentVolumeClaim pvc-j9wf9 found and phase=Bound (98.278271ms)
Sep 17 07:24:19.447: INFO: Waiting up to 3m0s for PersistentVolume nfs-6v8x6 to have phase Bound
Sep 17 07:24:19.543: INFO: PersistentVolume nfs-6v8x6 found and phase=Bound (95.814713ms)
STEP: Checking pod has write access to PersistentVolume
Sep 17 07:24:19.736: INFO: Creating nfs test pod
Sep 17 07:24:19.833: INFO: Pod should terminate with exitcode 0 (success)
Sep 17 07:24:19.833: INFO: Waiting up to 5m0s for pod "pvc-tester-ghbhw" in namespace "pv-5710" to be "Succeeded or Failed"
Sep 17 07:24:19.929: INFO: Pod "pvc-tester-ghbhw": Phase="Pending", Reason="", readiness=false. Elapsed: 95.835771ms
Sep 17 07:24:22.025: INFO: Pod "pvc-tester-ghbhw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192080991s
Sep 17 07:24:24.134: INFO: Pod "pvc-tester-ghbhw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.301169306s
STEP: Saw pod success
Sep 17 07:24:24.134: INFO: Pod "pvc-tester-ghbhw" satisfied condition "Succeeded or Failed"
Sep 17 07:24:24.134: INFO: Pod pvc-tester-ghbhw succeeded 
Sep 17 07:24:24.134: INFO: Deleting pod "pvc-tester-ghbhw" in namespace "pv-5710"
Sep 17 07:24:24.246: INFO: Wait up to 5m0s for pod "pvc-tester-ghbhw" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Sep 17 07:24:24.342: INFO: Deleting PVC pvc-j9wf9 to trigger reclamation of PV 
Sep 17 07:24:24.342: INFO: Deleting PersistentVolumeClaim "pvc-j9wf9"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":4,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:35.335: INFO: Only supported for providers [gce gke] (not aws)
... skipping 44 lines ...
Sep 17 07:24:28.144: INFO: PersistentVolumeClaim pvc-8mjr9 found but phase is Pending instead of Bound.
Sep 17 07:24:30.243: INFO: PersistentVolumeClaim pvc-8mjr9 found and phase=Bound (14.777746981s)
Sep 17 07:24:30.243: INFO: Waiting up to 3m0s for PersistentVolume local-t9pjt to have phase Bound
Sep 17 07:24:30.343: INFO: PersistentVolume local-t9pjt found and phase=Bound (99.907628ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-cpqv
STEP: Creating a pod to test subpath
Sep 17 07:24:30.631: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cpqv" in namespace "provisioning-4846" to be "Succeeded or Failed"
Sep 17 07:24:30.727: INFO: Pod "pod-subpath-test-preprovisionedpv-cpqv": Phase="Pending", Reason="", readiness=false. Elapsed: 95.863662ms
Sep 17 07:24:32.823: INFO: Pod "pod-subpath-test-preprovisionedpv-cpqv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191978656s
Sep 17 07:24:34.920: INFO: Pod "pod-subpath-test-preprovisionedpv-cpqv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.288858606s
STEP: Saw pod success
Sep 17 07:24:34.920: INFO: Pod "pod-subpath-test-preprovisionedpv-cpqv" satisfied condition "Succeeded or Failed"
Sep 17 07:24:35.016: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-cpqv container test-container-subpath-preprovisionedpv-cpqv: <nil>
STEP: delete the pod
Sep 17 07:24:35.219: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cpqv to disappear
Sep 17 07:24:35.316: INFO: Pod pod-subpath-test-preprovisionedpv-cpqv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-cpqv
Sep 17 07:24:35.317: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cpqv" in namespace "provisioning-4846"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:82.350 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:239
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:7.442 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
• [SLOW TEST:15.912 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:24:35.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-5896e5dd-d425-43b9-a998-0ecfd1cab48f
STEP: Creating a pod to test consume configMaps
Sep 17 07:24:36.026: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d330a1d5-6502-4819-8af7-b73ae0d5d9e3" in namespace "projected-7662" to be "Succeeded or Failed"
Sep 17 07:24:36.122: INFO: Pod "pod-projected-configmaps-d330a1d5-6502-4819-8af7-b73ae0d5d9e3": Phase="Pending", Reason="", readiness=false. Elapsed: 96.155873ms
Sep 17 07:24:38.223: INFO: Pod "pod-projected-configmaps-d330a1d5-6502-4819-8af7-b73ae0d5d9e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196193963s
Sep 17 07:24:40.323: INFO: Pod "pod-projected-configmaps-d330a1d5-6502-4819-8af7-b73ae0d5d9e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297045337s
Sep 17 07:24:42.425: INFO: Pod "pod-projected-configmaps-d330a1d5-6502-4819-8af7-b73ae0d5d9e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.398432502s
STEP: Saw pod success
Sep 17 07:24:42.425: INFO: Pod "pod-projected-configmaps-d330a1d5-6502-4819-8af7-b73ae0d5d9e3" satisfied condition "Succeeded or Failed"
Sep 17 07:24:42.521: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-projected-configmaps-d330a1d5-6502-4819-8af7-b73ae0d5d9e3 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 07:24:42.720: INFO: Waiting for pod pod-projected-configmaps-d330a1d5-6502-4819-8af7-b73ae0d5d9e3 to disappear
Sep 17 07:24:42.820: INFO: Pod pod-projected-configmaps-d330a1d5-6502-4819-8af7-b73ae0d5d9e3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.668 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:43.024: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 36 lines ...
• [SLOW TEST:12.357 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":3,"skipped":5,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":18,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Sep 17 07:24:29.483: INFO: PersistentVolumeClaim pvc-p5sj2 found but phase is Pending instead of Bound.
Sep 17 07:24:31.579: INFO: PersistentVolumeClaim pvc-p5sj2 found and phase=Bound (4.288172345s)
Sep 17 07:24:31.579: INFO: Waiting up to 3m0s for PersistentVolume local-gvwgj to have phase Bound
Sep 17 07:24:31.675: INFO: PersistentVolume local-gvwgj found and phase=Bound (95.69479ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-96nz
STEP: Creating a pod to test subpath
Sep 17 07:24:31.980: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-96nz" in namespace "provisioning-9971" to be "Succeeded or Failed"
Sep 17 07:24:32.090: INFO: Pod "pod-subpath-test-preprovisionedpv-96nz": Phase="Pending", Reason="", readiness=false. Elapsed: 109.652143ms
Sep 17 07:24:34.196: INFO: Pod "pod-subpath-test-preprovisionedpv-96nz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215480884s
Sep 17 07:24:36.293: INFO: Pod "pod-subpath-test-preprovisionedpv-96nz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312338264s
Sep 17 07:24:38.390: INFO: Pod "pod-subpath-test-preprovisionedpv-96nz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410129657s
Sep 17 07:24:40.490: INFO: Pod "pod-subpath-test-preprovisionedpv-96nz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.509389274s
Sep 17 07:24:42.587: INFO: Pod "pod-subpath-test-preprovisionedpv-96nz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.606417175s
STEP: Saw pod success
Sep 17 07:24:42.587: INFO: Pod "pod-subpath-test-preprovisionedpv-96nz" satisfied condition "Succeeded or Failed"
Sep 17 07:24:42.682: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-96nz container test-container-subpath-preprovisionedpv-96nz: <nil>
STEP: delete the pod
Sep 17 07:24:42.885: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-96nz to disappear
Sep 17 07:24:42.981: INFO: Pod pod-subpath-test-preprovisionedpv-96nz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-96nz
Sep 17 07:24:42.981: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-96nz" in namespace "provisioning-9971"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:54
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:65
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":4,"skipped":32,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
Sep 17 07:23:53.961: INFO: PersistentVolumeClaim pvc-nz9sc found and phase=Bound (96.194584ms)
Sep 17 07:23:53.961: INFO: Waiting up to 3m0s for PersistentVolume nfs-d9jjb to have phase Bound
Sep 17 07:23:54.059: INFO: PersistentVolume nfs-d9jjb found and phase=Bound (97.159749ms)
[It] should test that a PV becomes Available and is clean after the PVC is deleted.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
STEP: Writing to the volume.
Sep 17 07:23:54.348: INFO: Waiting up to 5m0s for pod "pvc-tester-fkgxs" in namespace "pv-8425" to be "Succeeded or Failed"
Sep 17 07:23:54.444: INFO: Pod "pvc-tester-fkgxs": Phase="Pending", Reason="", readiness=false. Elapsed: 96.139749ms
Sep 17 07:23:56.548: INFO: Pod "pvc-tester-fkgxs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199368382s
Sep 17 07:23:58.645: INFO: Pod "pvc-tester-fkgxs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296403926s
Sep 17 07:24:00.742: INFO: Pod "pvc-tester-fkgxs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393871938s
Sep 17 07:24:02.839: INFO: Pod "pvc-tester-fkgxs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.490664827s
Sep 17 07:24:04.935: INFO: Pod "pvc-tester-fkgxs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.587258844s
Sep 17 07:24:07.032: INFO: Pod "pvc-tester-fkgxs": Phase="Pending", Reason="", readiness=false. Elapsed: 12.683774055s
Sep 17 07:24:09.129: INFO: Pod "pvc-tester-fkgxs": Phase="Pending", Reason="", readiness=false. Elapsed: 14.78111735s
Sep 17 07:24:11.226: INFO: Pod "pvc-tester-fkgxs": Phase="Pending", Reason="", readiness=false. Elapsed: 16.877863837s
Sep 17 07:24:13.324: INFO: Pod "pvc-tester-fkgxs": Phase="Pending", Reason="", readiness=false. Elapsed: 18.976040777s
Sep 17 07:24:15.421: INFO: Pod "pvc-tester-fkgxs": Phase="Pending", Reason="", readiness=false. Elapsed: 21.072943263s
Sep 17 07:24:17.517: INFO: Pod "pvc-tester-fkgxs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.168980207s
STEP: Saw pod success
Sep 17 07:24:17.517: INFO: Pod "pvc-tester-fkgxs" satisfied condition "Succeeded or Failed"
STEP: Deleting the claim
Sep 17 07:24:17.517: INFO: Deleting pod "pvc-tester-fkgxs" in namespace "pv-8425"
Sep 17 07:24:17.621: INFO: Wait up to 5m0s for pod "pvc-tester-fkgxs" to be fully deleted
Sep 17 07:24:17.717: INFO: Deleting PVC pvc-nz9sc to trigger reclamation of PV 
Sep 17 07:24:17.717: INFO: Deleting PersistentVolumeClaim "pvc-nz9sc"
Sep 17 07:24:17.814: INFO: Waiting for reclaim process to complete.
... skipping 3 lines ...
Sep 17 07:24:22.106: INFO: PersistentVolume nfs-d9jjb found and phase=Available (4.292149223s)
Sep 17 07:24:22.205: INFO: PV nfs-d9jjb now in "Available" phase
STEP: Re-mounting the volume.
Sep 17 07:24:22.302: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-88tcv] to have phase Bound
Sep 17 07:24:22.398: INFO: PersistentVolumeClaim pvc-88tcv found and phase=Bound (95.828645ms)
STEP: Verifying the mount has been cleaned.
Sep 17 07:24:22.495: INFO: Waiting up to 5m0s for pod "pvc-tester-fs8dx" in namespace "pv-8425" to be "Succeeded or Failed"
Sep 17 07:24:22.600: INFO: Pod "pvc-tester-fs8dx": Phase="Pending", Reason="", readiness=false. Elapsed: 104.805602ms
Sep 17 07:24:24.698: INFO: Pod "pvc-tester-fs8dx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20252828s
Sep 17 07:24:26.797: INFO: Pod "pvc-tester-fs8dx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301376979s
Sep 17 07:24:28.904: INFO: Pod "pvc-tester-fs8dx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.408777737s
Sep 17 07:24:31.000: INFO: Pod "pvc-tester-fs8dx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.505001069s
STEP: Saw pod success
Sep 17 07:24:31.001: INFO: Pod "pvc-tester-fs8dx" satisfied condition "Succeeded or Failed"
Sep 17 07:24:31.001: INFO: Deleting pod "pvc-tester-fs8dx" in namespace "pv-8425"
Sep 17 07:24:31.101: INFO: Wait up to 5m0s for pod "pvc-tester-fs8dx" to be fully deleted
Sep 17 07:24:31.199: INFO: Pod exited without failure; the volume has been recycled.
Sep 17 07:24:31.199: INFO: Removing second PVC, waiting for the recycler to finish before cleanup.
Sep 17 07:24:31.199: INFO: Deleting PVC pvc-88tcv to trigger reclamation of PV 
Sep 17 07:24:31.199: INFO: Deleting PersistentVolumeClaim "pvc-88tcv"
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    when invoking the Recycle reclaim policy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:265
      should test that a PV becomes Available and is clean after the PVC is deleted.
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","total":-1,"completed":3,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 41 lines ...
• [SLOW TEST:37.588 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:449
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":3,"skipped":12,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:51.406: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 216 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:52.167: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
... skipping 67 lines ...
Sep 17 07:24:51.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Sep 17 07:24:52.097: INFO: Waiting up to 5m0s for pod "security-context-76c879a4-b551-446c-9b1a-c07176affcb2" in namespace "security-context-7676" to be "Succeeded or Failed"
Sep 17 07:24:52.193: INFO: Pod "security-context-76c879a4-b551-446c-9b1a-c07176affcb2": Phase="Pending", Reason="", readiness=false. Elapsed: 95.993323ms
Sep 17 07:24:54.291: INFO: Pod "security-context-76c879a4-b551-446c-9b1a-c07176affcb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.193820534s
STEP: Saw pod success
Sep 17 07:24:54.291: INFO: Pod "security-context-76c879a4-b551-446c-9b1a-c07176affcb2" satisfied condition "Succeeded or Failed"
Sep 17 07:24:54.388: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod security-context-76c879a4-b551-446c-9b1a-c07176affcb2 container test-container: <nil>
STEP: delete the pod
Sep 17 07:24:54.585: INFO: Waiting for pod security-context-76c879a4-b551-446c-9b1a-c07176affcb2 to disappear
Sep 17 07:24:54.682: INFO: Pod security-context-76c879a4-b551-446c-9b1a-c07176affcb2 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:24:54.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-7676" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":4,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:63.240 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a failing exec liveness probe that took longer than the timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:258
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":4,"skipped":55,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:24:59.883: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 46 lines ...
• [SLOW TEST:15.104 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":4,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:00.358: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 107 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 96 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:25:02.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9475" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":5,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 127 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:03.105: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 127 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":4,"skipped":37,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:25:00.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:25:04.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-143" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":5,"skipped":37,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:04.279: INFO: Only supported for providers [azure] (not aws)
... skipping 84 lines ...
Sep 17 07:24:29.238: INFO: PersistentVolumeClaim csi-hostpathrwq7r found but phase is Pending instead of Bound.
Sep 17 07:24:31.340: INFO: PersistentVolumeClaim csi-hostpathrwq7r found but phase is Pending instead of Bound.
Sep 17 07:24:33.438: INFO: PersistentVolumeClaim csi-hostpathrwq7r found but phase is Pending instead of Bound.
Sep 17 07:24:35.537: INFO: PersistentVolumeClaim csi-hostpathrwq7r found and phase=Bound (8.492553168s)
STEP: Creating pod pod-subpath-test-dynamicpv-frrj
STEP: Creating a pod to test subpath
Sep 17 07:24:35.833: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-frrj" in namespace "provisioning-1368" to be "Succeeded or Failed"
Sep 17 07:24:35.934: INFO: Pod "pod-subpath-test-dynamicpv-frrj": Phase="Pending", Reason="", readiness=false. Elapsed: 101.327472ms
Sep 17 07:24:38.032: INFO: Pod "pod-subpath-test-dynamicpv-frrj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199235022s
Sep 17 07:24:40.130: INFO: Pod "pod-subpath-test-dynamicpv-frrj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297217721s
Sep 17 07:24:42.227: INFO: Pod "pod-subpath-test-dynamicpv-frrj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394427109s
Sep 17 07:24:44.325: INFO: Pod "pod-subpath-test-dynamicpv-frrj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.492386751s
Sep 17 07:24:46.430: INFO: Pod "pod-subpath-test-dynamicpv-frrj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.596817936s
Sep 17 07:24:48.528: INFO: Pod "pod-subpath-test-dynamicpv-frrj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.695223007s
STEP: Saw pod success
Sep 17 07:24:48.528: INFO: Pod "pod-subpath-test-dynamicpv-frrj" satisfied condition "Succeeded or Failed"
Sep 17 07:24:48.626: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-frrj container test-container-subpath-dynamicpv-frrj: <nil>
STEP: delete the pod
Sep 17 07:24:48.830: INFO: Waiting for pod pod-subpath-test-dynamicpv-frrj to disappear
Sep 17 07:24:48.939: INFO: Pod pod-subpath-test-dynamicpv-frrj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-frrj
Sep 17 07:24:48.939: INFO: Deleting pod "pod-subpath-test-dynamicpv-frrj" in namespace "provisioning-1368"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:05.856: INFO: Only supported for providers [vsphere] (not aws)
... skipping 81 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:25:02.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:5.457 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:07.669: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 142 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":47,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:07.901: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 159 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":3,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:42.577 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:08.854: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:202
STEP: Creating a pod with an ignorelisted, but not allowlisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:25:10.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-650" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":5,"skipped":48,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:10.808: INFO: Only supported for providers [openstack] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 07:25:03.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fc785e6-63f6-4d84-ac90-38a53ccd44f9" in namespace "projected-9029" to be "Succeeded or Failed"
Sep 17 07:25:03.936: INFO: Pod "downwardapi-volume-5fc785e6-63f6-4d84-ac90-38a53ccd44f9": Phase="Pending", Reason="", readiness=false. Elapsed: 96.346499ms
Sep 17 07:25:06.033: INFO: Pod "downwardapi-volume-5fc785e6-63f6-4d84-ac90-38a53ccd44f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193701555s
Sep 17 07:25:08.140: INFO: Pod "downwardapi-volume-5fc785e6-63f6-4d84-ac90-38a53ccd44f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3012165s
Sep 17 07:25:10.237: INFO: Pod "downwardapi-volume-5fc785e6-63f6-4d84-ac90-38a53ccd44f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.398160968s
STEP: Saw pod success
Sep 17 07:25:10.238: INFO: Pod "downwardapi-volume-5fc785e6-63f6-4d84-ac90-38a53ccd44f9" satisfied condition "Succeeded or Failed"
Sep 17 07:25:10.334: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod downwardapi-volume-5fc785e6-63f6-4d84-ac90-38a53ccd44f9 container client-container: <nil>
STEP: delete the pod
Sep 17 07:25:10.534: INFO: Waiting for pod downwardapi-volume-5fc785e6-63f6-4d84-ac90-38a53ccd44f9 to disappear
Sep 17 07:25:10.630: INFO: Pod downwardapi-volume-5fc785e6-63f6-4d84-ac90-38a53ccd44f9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365

      Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:10.830: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":6,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:11.645: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 53 lines ...
• [SLOW TEST:19.676 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":6,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:11.927: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:25:12.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4976" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":7,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 178 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023
    unlimited
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":2,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:13.755: INFO: Only supported for providers [azure] (not aws)
... skipping 116 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Sep 17 07:25:03.461: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 17 07:25:03.559: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-gzcw
STEP: Creating a pod to test subpath
Sep 17 07:25:03.659: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-gzcw" in namespace "provisioning-7041" to be "Succeeded or Failed"
Sep 17 07:25:03.757: INFO: Pod "pod-subpath-test-inlinevolume-gzcw": Phase="Pending", Reason="", readiness=false. Elapsed: 97.533823ms
Sep 17 07:25:05.855: INFO: Pod "pod-subpath-test-inlinevolume-gzcw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195443363s
Sep 17 07:25:07.953: INFO: Pod "pod-subpath-test-inlinevolume-gzcw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29370061s
Sep 17 07:25:10.051: INFO: Pod "pod-subpath-test-inlinevolume-gzcw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391264094s
Sep 17 07:25:12.148: INFO: Pod "pod-subpath-test-inlinevolume-gzcw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4887715s
Sep 17 07:25:14.247: INFO: Pod "pod-subpath-test-inlinevolume-gzcw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.587979199s
STEP: Saw pod success
Sep 17 07:25:14.247: INFO: Pod "pod-subpath-test-inlinevolume-gzcw" satisfied condition "Succeeded or Failed"
Sep 17 07:25:14.345: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-gzcw container test-container-subpath-inlinevolume-gzcw: <nil>
STEP: delete the pod
Sep 17 07:25:14.555: INFO: Waiting for pod pod-subpath-test-inlinevolume-gzcw to disappear
Sep 17 07:25:14.655: INFO: Pod pod-subpath-test-inlinevolume-gzcw no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-gzcw
Sep 17 07:25:14.655: INFO: Deleting pod "pod-subpath-test-inlinevolume-gzcw" in namespace "provisioning-7041"
... skipping 130 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should verify that all csinodes have volume limits
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumelimits.go:238
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits","total":-1,"completed":3,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:15.741: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":27,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:25:15.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Destroying namespace "services-9955" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":7,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Sep 17 07:25:14.040: INFO: PersistentVolumeClaim pvc-x2rh4 found but phase is Pending instead of Bound.
Sep 17 07:25:16.138: INFO: PersistentVolumeClaim pvc-x2rh4 found and phase=Bound (6.39285471s)
Sep 17 07:25:16.138: INFO: Waiting up to 3m0s for PersistentVolume local-54sxv to have phase Bound
Sep 17 07:25:16.237: INFO: PersistentVolume local-54sxv found and phase=Bound (98.744157ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5k2m
STEP: Creating a pod to test subpath
Sep 17 07:25:16.529: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5k2m" in namespace "provisioning-3947" to be "Succeeded or Failed"
Sep 17 07:25:16.626: INFO: Pod "pod-subpath-test-preprovisionedpv-5k2m": Phase="Pending", Reason="", readiness=false. Elapsed: 97.03242ms
Sep 17 07:25:18.724: INFO: Pod "pod-subpath-test-preprovisionedpv-5k2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195608017s
Sep 17 07:25:20.822: INFO: Pod "pod-subpath-test-preprovisionedpv-5k2m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.292919324s
STEP: Saw pod success
Sep 17 07:25:20.822: INFO: Pod "pod-subpath-test-preprovisionedpv-5k2m" satisfied condition "Succeeded or Failed"
Sep 17 07:25:20.920: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-5k2m container test-container-subpath-preprovisionedpv-5k2m: <nil>
STEP: delete the pod
Sep 17 07:25:21.152: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5k2m to disappear
Sep 17 07:25:21.249: INFO: Pod pod-subpath-test-preprovisionedpv-5k2m no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5k2m
Sep 17 07:25:21.249: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5k2m" in namespace "provisioning-3947"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:22.637: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
Sep 17 07:25:14.776: INFO: PersistentVolumeClaim pvc-bgczk found but phase is Pending instead of Bound.
Sep 17 07:25:16.873: INFO: PersistentVolumeClaim pvc-bgczk found and phase=Bound (4.291305699s)
Sep 17 07:25:16.873: INFO: Waiting up to 3m0s for PersistentVolume local-gpjkx to have phase Bound
Sep 17 07:25:16.970: INFO: PersistentVolume local-gpjkx found and phase=Bound (96.313814ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4r7f
STEP: Creating a pod to test subpath
Sep 17 07:25:17.263: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4r7f" in namespace "provisioning-8618" to be "Succeeded or Failed"
Sep 17 07:25:17.359: INFO: Pod "pod-subpath-test-preprovisionedpv-4r7f": Phase="Pending", Reason="", readiness=false. Elapsed: 96.563586ms
Sep 17 07:25:19.457: INFO: Pod "pod-subpath-test-preprovisionedpv-4r7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19449779s
Sep 17 07:25:21.556: INFO: Pod "pod-subpath-test-preprovisionedpv-4r7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.29319283s
STEP: Saw pod success
Sep 17 07:25:21.556: INFO: Pod "pod-subpath-test-preprovisionedpv-4r7f" satisfied condition "Succeeded or Failed"
Sep 17 07:25:21.655: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-4r7f container test-container-volume-preprovisionedpv-4r7f: <nil>
STEP: delete the pod
Sep 17 07:25:21.859: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4r7f to disappear
Sep 17 07:25:21.955: INFO: Pod pod-subpath-test-preprovisionedpv-4r7f no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4r7f
Sep 17 07:25:21.955: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4r7f" in namespace "provisioning-8618"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":17,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:23.348: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 33 lines ...
STEP: Destroying namespace "apply-1208" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":5,"skipped":23,"failed":0}
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:25:24.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:25:25.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-9904" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":6,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Sep 17 07:25:16.345: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 17 07:25:16.345: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-sfv9
STEP: Creating a pod to test subpath
Sep 17 07:25:16.445: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-sfv9" in namespace "provisioning-4218" to be "Succeeded or Failed"
Sep 17 07:25:16.550: INFO: Pod "pod-subpath-test-inlinevolume-sfv9": Phase="Pending", Reason="", readiness=false. Elapsed: 104.871135ms
Sep 17 07:25:18.648: INFO: Pod "pod-subpath-test-inlinevolume-sfv9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20230754s
Sep 17 07:25:20.747: INFO: Pod "pod-subpath-test-inlinevolume-sfv9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301417127s
Sep 17 07:25:22.844: INFO: Pod "pod-subpath-test-inlinevolume-sfv9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.399095112s
Sep 17 07:25:24.951: INFO: Pod "pod-subpath-test-inlinevolume-sfv9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.505717633s
Sep 17 07:25:27.048: INFO: Pod "pod-subpath-test-inlinevolume-sfv9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.603051207s
STEP: Saw pod success
Sep 17 07:25:27.049: INFO: Pod "pod-subpath-test-inlinevolume-sfv9" satisfied condition "Succeeded or Failed"
Sep 17 07:25:27.146: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-sfv9 container test-container-subpath-inlinevolume-sfv9: <nil>
STEP: delete the pod
Sep 17 07:25:27.357: INFO: Waiting for pod pod-subpath-test-inlinevolume-sfv9 to disappear
Sep 17 07:25:27.454: INFO: Pod pod-subpath-test-inlinevolume-sfv9 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-sfv9
Sep 17 07:25:27.454: INFO: Deleting pod "pod-subpath-test-inlinevolume-sfv9" in namespace "provisioning-4218"
... skipping 75 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":4,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:31.224: INFO: Only supported for providers [vsphere] (not aws)
... skipping 76 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":5,"skipped":17,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:25:32.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4179" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:32.970: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Sep 17 07:25:25.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Sep 17 07:25:26.222: INFO: Waiting up to 5m0s for pod "test-pod-7457f151-b994-467f-b535-e358ab85a96f" in namespace "svcaccounts-6017" to be "Succeeded or Failed"
Sep 17 07:25:26.319: INFO: Pod "test-pod-7457f151-b994-467f-b535-e358ab85a96f": Phase="Pending", Reason="", readiness=false. Elapsed: 96.6497ms
Sep 17 07:25:28.423: INFO: Pod "test-pod-7457f151-b994-467f-b535-e358ab85a96f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200829839s
Sep 17 07:25:30.521: INFO: Pod "test-pod-7457f151-b994-467f-b535-e358ab85a96f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29892084s
Sep 17 07:25:32.633: INFO: Pod "test-pod-7457f151-b994-467f-b535-e358ab85a96f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.411458803s
STEP: Saw pod success
Sep 17 07:25:32.634: INFO: Pod "test-pod-7457f151-b994-467f-b535-e358ab85a96f" satisfied condition "Succeeded or Failed"
Sep 17 07:25:32.730: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod test-pod-7457f151-b994-467f-b535-e358ab85a96f container agnhost-container: <nil>
STEP: delete the pod
Sep 17 07:25:32.937: INFO: Waiting for pod test-pod-7457f151-b994-467f-b535-e358ab85a96f to disappear
Sep 17 07:25:33.033: INFO: Pod test-pod-7457f151-b994-467f-b535-e358ab85a96f no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.595 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":7,"skipped":24,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 94 lines ...
• [SLOW TEST:23.661 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":7,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:35.344: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
• [SLOW TEST:22.968 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should create pods for an Indexed job with completion indexes and specified hostname
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:150
------------------------------
{"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname","total":-1,"completed":4,"skipped":19,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:36.267: INFO: Only supported for providers [azure] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-34e2ef52-168e-416d-b73d-b06f28ad47a0
STEP: Creating a pod to test consume secrets
Sep 17 07:25:21.857: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58" in namespace "projected-1509" to be "Succeeded or Failed"
Sep 17 07:25:21.953: INFO: Pod "pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58": Phase="Pending", Reason="", readiness=false. Elapsed: 96.206308ms
Sep 17 07:25:24.059: INFO: Pod "pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202155213s
Sep 17 07:25:26.157: INFO: Pod "pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299360301s
Sep 17 07:25:28.254: INFO: Pod "pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.397124606s
Sep 17 07:25:30.352: INFO: Pod "pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.49442046s
Sep 17 07:25:32.476: INFO: Pod "pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58": Phase="Pending", Reason="", readiness=false. Elapsed: 10.618652188s
Sep 17 07:25:34.575: INFO: Pod "pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58": Phase="Pending", Reason="", readiness=false. Elapsed: 12.717872457s
Sep 17 07:25:36.673: INFO: Pod "pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.816159771s
STEP: Saw pod success
Sep 17 07:25:36.673: INFO: Pod "pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58" satisfied condition "Succeeded or Failed"
Sep 17 07:25:36.770: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 07:25:36.974: INFO: Waiting for pod pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58 to disappear
Sep 17 07:25:37.075: INFO: Pod pod-projected-secrets-d97b05f5-3864-42e4-b019-eb8798ccfd58 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:16.097 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:37.280: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":8,"skipped":28,"failed":0}
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:25:27.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-3e260f18-a2fe-47a5-be8d-a3a4b330e623
STEP: Creating a pod to test consume secrets
Sep 17 07:25:28.551: INFO: Waiting up to 5m0s for pod "pod-secrets-18b732bd-979d-46ef-8ddc-903144987961" in namespace "secrets-3115" to be "Succeeded or Failed"
Sep 17 07:25:28.648: INFO: Pod "pod-secrets-18b732bd-979d-46ef-8ddc-903144987961": Phase="Pending", Reason="", readiness=false. Elapsed: 97.028099ms
Sep 17 07:25:30.746: INFO: Pod "pod-secrets-18b732bd-979d-46ef-8ddc-903144987961": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194963309s
Sep 17 07:25:32.844: INFO: Pod "pod-secrets-18b732bd-979d-46ef-8ddc-903144987961": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292685848s
Sep 17 07:25:34.941: INFO: Pod "pod-secrets-18b732bd-979d-46ef-8ddc-903144987961": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389934755s
Sep 17 07:25:37.038: INFO: Pod "pod-secrets-18b732bd-979d-46ef-8ddc-903144987961": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.487607592s
STEP: Saw pod success
Sep 17 07:25:37.039: INFO: Pod "pod-secrets-18b732bd-979d-46ef-8ddc-903144987961" satisfied condition "Succeeded or Failed"
Sep 17 07:25:37.141: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-secrets-18b732bd-979d-46ef-8ddc-903144987961 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 07:25:37.349: INFO: Waiting for pod pod-secrets-18b732bd-979d-46ef-8ddc-903144987961 to disappear
Sep 17 07:25:37.447: INFO: Pod pod-secrets-18b732bd-979d-46ef-8ddc-903144987961 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.782 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:37.651: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
... skipping 47 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl copy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1355
    should copy a file from a running Pod
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1372
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":8,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:39.577: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 124 lines ...
• [SLOW TEST:9.326 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":59,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:25:28.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a failing command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:505
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":8,"skipped":43,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command","total":-1,"completed":6,"skipped":59,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:44.731: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 14 lines ...
STEP: Destroying namespace "node-problem-detector-4547" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.689 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 158 lines ...
• [SLOW TEST:23.959 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:280
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":9,"skipped":77,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:46.710: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 136 lines ...
Sep 17 07:25:14.456: INFO: PersistentVolumeClaim pvc-dkdjk found but phase is Pending instead of Bound.
Sep 17 07:25:16.554: INFO: PersistentVolumeClaim pvc-dkdjk found and phase=Bound (14.79666841s)
Sep 17 07:25:16.554: INFO: Waiting up to 3m0s for PersistentVolume local-qpqtg to have phase Bound
Sep 17 07:25:16.651: INFO: PersistentVolume local-qpqtg found and phase=Bound (97.457587ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ns9w
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 07:25:16.949: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ns9w" in namespace "provisioning-9235" to be "Succeeded or Failed"
Sep 17 07:25:17.046: INFO: Pod "pod-subpath-test-preprovisionedpv-ns9w": Phase="Pending", Reason="", readiness=false. Elapsed: 97.474212ms
Sep 17 07:25:19.145: INFO: Pod "pod-subpath-test-preprovisionedpv-ns9w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195952305s
Sep 17 07:25:21.244: INFO: Pod "pod-subpath-test-preprovisionedpv-ns9w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294860121s
Sep 17 07:25:23.342: INFO: Pod "pod-subpath-test-preprovisionedpv-ns9w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393640147s
Sep 17 07:25:25.441: INFO: Pod "pod-subpath-test-preprovisionedpv-ns9w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.491977719s
Sep 17 07:25:27.539: INFO: Pod "pod-subpath-test-preprovisionedpv-ns9w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.589766265s
... skipping 4 lines ...
Sep 17 07:25:38.043: INFO: Pod "pod-subpath-test-preprovisionedpv-ns9w": Phase="Running", Reason="", readiness=true. Elapsed: 21.093892881s
Sep 17 07:25:40.142: INFO: Pod "pod-subpath-test-preprovisionedpv-ns9w": Phase="Running", Reason="", readiness=true. Elapsed: 23.19314613s
Sep 17 07:25:42.240: INFO: Pod "pod-subpath-test-preprovisionedpv-ns9w": Phase="Running", Reason="", readiness=true. Elapsed: 25.29075814s
Sep 17 07:25:44.337: INFO: Pod "pod-subpath-test-preprovisionedpv-ns9w": Phase="Running", Reason="", readiness=true. Elapsed: 27.388358203s
Sep 17 07:25:46.435: INFO: Pod "pod-subpath-test-preprovisionedpv-ns9w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.4862288s
STEP: Saw pod success
Sep 17 07:25:46.435: INFO: Pod "pod-subpath-test-preprovisionedpv-ns9w" satisfied condition "Succeeded or Failed"
Sep 17 07:25:46.532: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-ns9w container test-container-subpath-preprovisionedpv-ns9w: <nil>
STEP: delete the pod
Sep 17 07:25:46.753: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ns9w to disappear
Sep 17 07:25:46.850: INFO: Pod pod-subpath-test-preprovisionedpv-ns9w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ns9w
Sep 17 07:25:46.850: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ns9w" in namespace "provisioning-9235"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:48.262: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 96 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:25:48.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8914" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":9,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:48.741: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 174 lines ...
Sep 17 07:24:54.233: INFO: PersistentVolumeClaim csi-hostpathldgxq found but phase is Pending instead of Bound.
Sep 17 07:24:56.331: INFO: PersistentVolumeClaim csi-hostpathldgxq found but phase is Pending instead of Bound.
Sep 17 07:24:58.427: INFO: PersistentVolumeClaim csi-hostpathldgxq found but phase is Pending instead of Bound.
Sep 17 07:25:00.524: INFO: PersistentVolumeClaim csi-hostpathldgxq found and phase=Bound (10.578074495s)
STEP: Creating pod pod-subpath-test-dynamicpv-kj5k
STEP: Creating a pod to test subpath
Sep 17 07:25:00.812: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-kj5k" in namespace "provisioning-7784" to be "Succeeded or Failed"
Sep 17 07:25:00.907: INFO: Pod "pod-subpath-test-dynamicpv-kj5k": Phase="Pending", Reason="", readiness=false. Elapsed: 95.457433ms
Sep 17 07:25:03.005: INFO: Pod "pod-subpath-test-dynamicpv-kj5k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193480365s
Sep 17 07:25:05.104: INFO: Pod "pod-subpath-test-dynamicpv-kj5k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29245462s
Sep 17 07:25:07.205: INFO: Pod "pod-subpath-test-dynamicpv-kj5k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393388796s
Sep 17 07:25:09.301: INFO: Pod "pod-subpath-test-dynamicpv-kj5k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.489778285s
Sep 17 07:25:11.398: INFO: Pod "pod-subpath-test-dynamicpv-kj5k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.586267799s
... skipping 2 lines ...
Sep 17 07:25:17.689: INFO: Pod "pod-subpath-test-dynamicpv-kj5k": Phase="Pending", Reason="", readiness=false. Elapsed: 16.877392779s
Sep 17 07:25:19.786: INFO: Pod "pod-subpath-test-dynamicpv-kj5k": Phase="Pending", Reason="", readiness=false. Elapsed: 18.974270518s
Sep 17 07:25:21.882: INFO: Pod "pod-subpath-test-dynamicpv-kj5k": Phase="Pending", Reason="", readiness=false. Elapsed: 21.070270302s
Sep 17 07:25:23.987: INFO: Pod "pod-subpath-test-dynamicpv-kj5k": Phase="Pending", Reason="", readiness=false. Elapsed: 23.1750727s
Sep 17 07:25:26.083: INFO: Pod "pod-subpath-test-dynamicpv-kj5k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.271557738s
STEP: Saw pod success
Sep 17 07:25:26.083: INFO: Pod "pod-subpath-test-dynamicpv-kj5k" satisfied condition "Succeeded or Failed"
Sep 17 07:25:26.180: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-kj5k container test-container-volume-dynamicpv-kj5k: <nil>
STEP: delete the pod
Sep 17 07:25:26.390: INFO: Waiting for pod pod-subpath-test-dynamicpv-kj5k to disappear
Sep 17 07:25:26.485: INFO: Pod pod-subpath-test-dynamicpv-kj5k no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-kj5k
Sep 17 07:25:26.485: INFO: Deleting pod "pod-subpath-test-dynamicpv-kj5k" in namespace "provisioning-7784"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
Sep 17 07:25:23.059: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:23.156: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:23.446: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:23.543: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:23.641: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:23.740: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:23.934: INFO: Lookups using dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1397.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1397.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local jessie_udp@dns-test-service-2.dns-1397.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1397.svc.cluster.local]

Sep 17 07:25:29.032: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:29.129: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:29.226: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:29.322: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:29.633: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:29.732: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:29.831: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:29.929: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:30.133: INFO: Lookups using dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1397.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1397.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local jessie_udp@dns-test-service-2.dns-1397.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1397.svc.cluster.local]

Sep 17 07:25:34.052: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:34.199: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:34.327: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:34.428: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:34.730: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:34.828: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:34.925: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:35.021: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:35.215: INFO: Lookups using dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1397.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1397.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local jessie_udp@dns-test-service-2.dns-1397.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1397.svc.cluster.local]

Sep 17 07:25:39.034: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:39.131: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:39.228: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:39.325: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:39.615: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:39.712: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:39.809: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:39.908: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:40.102: INFO: Lookups using dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1397.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1397.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local jessie_udp@dns-test-service-2.dns-1397.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1397.svc.cluster.local]

Sep 17 07:25:44.031: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:44.127: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:44.224: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:44.321: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:44.613: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:44.710: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:44.807: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:44.903: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1397.svc.cluster.local from pod dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511: the server could not find the requested resource (get pods dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511)
Sep 17 07:25:45.097: INFO: Lookups using dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1397.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1397.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1397.svc.cluster.local jessie_udp@dns-test-service-2.dns-1397.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1397.svc.cluster.local]

Sep 17 07:25:50.111: INFO: DNS probes using dns-1397/dns-test-fb5db67f-80e3-473d-bdee-b5e5966db511 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 117 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:673
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:688
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":5,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:53.040: INFO: Only supported for providers [gce gke] (not aws)
... skipping 46 lines ...
• [SLOW TEST:10.056 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":87,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:25:55.623: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 117 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 07:25:49.496: INFO: The status of Pod server-envvars-469e5592-59c0-468a-98de-2c2795f1d286 is Pending, waiting for it to be Running (with Ready = true)
Sep 17 07:25:51.592: INFO: The status of Pod server-envvars-469e5592-59c0-468a-98de-2c2795f1d286 is Pending, waiting for it to be Running (with Ready = true)
Sep 17 07:25:53.592: INFO: The status of Pod server-envvars-469e5592-59c0-468a-98de-2c2795f1d286 is Running (Ready = true)
Sep 17 07:25:53.885: INFO: Waiting up to 5m0s for pod "client-envvars-a6c5f798-1233-4c7e-94b6-5fcda8dc631e" in namespace "pods-2429" to be "Succeeded or Failed"
Sep 17 07:25:53.981: INFO: Pod "client-envvars-a6c5f798-1233-4c7e-94b6-5fcda8dc631e": Phase="Pending", Reason="", readiness=false. Elapsed: 95.418687ms
Sep 17 07:25:56.078: INFO: Pod "client-envvars-a6c5f798-1233-4c7e-94b6-5fcda8dc631e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192458736s
Sep 17 07:25:58.174: INFO: Pod "client-envvars-a6c5f798-1233-4c7e-94b6-5fcda8dc631e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.288881186s
STEP: Saw pod success
Sep 17 07:25:58.174: INFO: Pod "client-envvars-a6c5f798-1233-4c7e-94b6-5fcda8dc631e" satisfied condition "Succeeded or Failed"
Sep 17 07:25:58.270: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod client-envvars-a6c5f798-1233-4c7e-94b6-5fcda8dc631e container env3cont: <nil>
STEP: delete the pod
Sep 17 07:25:58.468: INFO: Waiting for pod client-envvars-a6c5f798-1233-4c7e-94b6-5fcda8dc631e to disappear
Sep 17 07:25:58.564: INFO: Pod client-envvars-a6c5f798-1233-4c7e-94b6-5fcda8dc631e no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.940 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":62,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 07:25:50.067: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e3d60be-c830-492d-8280-5e16bf8f2bad" in namespace "downward-api-7091" to be "Succeeded or Failed"
Sep 17 07:25:50.163: INFO: Pod "downwardapi-volume-4e3d60be-c830-492d-8280-5e16bf8f2bad": Phase="Pending", Reason="", readiness=false. Elapsed: 95.592923ms
Sep 17 07:25:52.258: INFO: Pod "downwardapi-volume-4e3d60be-c830-492d-8280-5e16bf8f2bad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191424563s
Sep 17 07:25:54.355: INFO: Pod "downwardapi-volume-4e3d60be-c830-492d-8280-5e16bf8f2bad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287849826s
Sep 17 07:25:56.451: INFO: Pod "downwardapi-volume-4e3d60be-c830-492d-8280-5e16bf8f2bad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.383806842s
Sep 17 07:25:58.548: INFO: Pod "downwardapi-volume-4e3d60be-c830-492d-8280-5e16bf8f2bad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.480935694s
STEP: Saw pod success
Sep 17 07:25:58.548: INFO: Pod "downwardapi-volume-4e3d60be-c830-492d-8280-5e16bf8f2bad" satisfied condition "Succeeded or Failed"
Sep 17 07:25:58.644: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod downwardapi-volume-4e3d60be-c830-492d-8280-5e16bf8f2bad container client-container: <nil>
STEP: delete the pod
Sep 17 07:25:58.848: INFO: Waiting for pod downwardapi-volume-4e3d60be-c830-492d-8280-5e16bf8f2bad to disappear
Sep 17 07:25:58.946: INFO: Pod downwardapi-volume-4e3d60be-c830-492d-8280-5e16bf8f2bad no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.649 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:25:58.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should ignore not found error with --for=delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1836
STEP: calling kubectl wait --for=delete
Sep 17 07:25:59.265: INFO: Running '/tmp/kubectl3656349511/kubectl --server=https://api.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-194 wait --for=delete pod/doesnotexist'
Sep 17 07:25:59.716: INFO: stderr: ""
Sep 17 07:25:59.716: INFO: stdout: ""
Sep 17 07:25:59.716: INFO: Running '/tmp/kubectl3656349511/kubectl --server=https://api.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-194 wait --for=delete pod --selector=app.kubernetes.io/name=noexist'
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:00.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-194" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete","total":-1,"completed":11,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:00.284: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 69 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 99 lines ...
• [SLOW TEST:16.391 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:924
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":4,"skipped":73,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:03.236: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 142 lines ...
Sep 17 07:25:47.880: INFO: Waiting for pod aws-client to disappear
Sep 17 07:25:47.977: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Sep 17 07:25:47.977: INFO: Deleting PersistentVolumeClaim "pvc-c5dkh"
Sep 17 07:25:48.075: INFO: Deleting PersistentVolume "aws-mmnss"
Sep 17 07:25:48.400: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0ba5dca947833e6a3", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ba5dca947833e6a3 is currently attached to i-0043fd8147d5de2ae
	status code: 400, request id: 521e849d-a9d5-456b-b2ad-af8cb423697e
Sep 17 07:25:53.943: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0ba5dca947833e6a3", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ba5dca947833e6a3 is currently attached to i-0043fd8147d5de2ae
	status code: 400, request id: f2d1af3c-90e0-457a-9833-78a7784096f8
Sep 17 07:25:59.464: INFO: Couldn't delete PD "aws://eu-west-2a/vol-0ba5dca947833e6a3", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ba5dca947833e6a3 is currently attached to i-0043fd8147d5de2ae
	status code: 400, request id: 7a59ea3b-8b41-4e42-82f6-54539a59dc0d
Sep 17 07:26:05.000: INFO: Successfully deleted PD "aws://eu-west-2a/vol-0ba5dca947833e6a3".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:05.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7222" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":50,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:05.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-716" for this suite.

•SS
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":5,"skipped":81,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:05.239: INFO: Only supported for providers [gce gke] (not aws)
... skipping 60 lines ...
Sep 17 07:25:58.474: INFO: PersistentVolumeClaim pvc-zq2dh found but phase is Pending instead of Bound.
Sep 17 07:26:00.572: INFO: PersistentVolumeClaim pvc-zq2dh found and phase=Bound (8.489284052s)
Sep 17 07:26:00.572: INFO: Waiting up to 3m0s for PersistentVolume local-29qbh to have phase Bound
Sep 17 07:26:00.671: INFO: PersistentVolume local-29qbh found and phase=Bound (98.328503ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ddf5
STEP: Creating a pod to test subpath
Sep 17 07:26:00.963: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ddf5" in namespace "provisioning-1134" to be "Succeeded or Failed"
Sep 17 07:26:01.110: INFO: Pod "pod-subpath-test-preprovisionedpv-ddf5": Phase="Pending", Reason="", readiness=false. Elapsed: 147.29613ms
Sep 17 07:26:03.208: INFO: Pod "pod-subpath-test-preprovisionedpv-ddf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244912203s
Sep 17 07:26:05.311: INFO: Pod "pod-subpath-test-preprovisionedpv-ddf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.347517679s
Sep 17 07:26:07.409: INFO: Pod "pod-subpath-test-preprovisionedpv-ddf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.445960981s
STEP: Saw pod success
Sep 17 07:26:07.409: INFO: Pod "pod-subpath-test-preprovisionedpv-ddf5" satisfied condition "Succeeded or Failed"
Sep 17 07:26:07.506: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-ddf5 container test-container-subpath-preprovisionedpv-ddf5: <nil>
STEP: delete the pod
Sep 17 07:26:07.714: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ddf5 to disappear
Sep 17 07:26:07.811: INFO: Pod pod-subpath-test-preprovisionedpv-ddf5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ddf5
Sep 17 07:26:07.811: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ddf5" in namespace "provisioning-1134"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":52,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:09.232: INFO: Only supported for providers [gce gke] (not aws)
... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":50,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 134 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":2,"skipped":12,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 23 lines ...
Sep 17 07:25:58.482: INFO: PersistentVolumeClaim pvc-hkr5m found but phase is Pending instead of Bound.
Sep 17 07:26:00.580: INFO: PersistentVolumeClaim pvc-hkr5m found and phase=Bound (8.49499924s)
Sep 17 07:26:00.581: INFO: Waiting up to 3m0s for PersistentVolume local-wjjtf to have phase Bound
Sep 17 07:26:00.681: INFO: PersistentVolume local-wjjtf found and phase=Bound (100.359971ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qxss
STEP: Creating a pod to test subpath
Sep 17 07:26:01.023: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qxss" in namespace "provisioning-7096" to be "Succeeded or Failed"
Sep 17 07:26:01.163: INFO: Pod "pod-subpath-test-preprovisionedpv-qxss": Phase="Pending", Reason="", readiness=false. Elapsed: 139.631375ms
Sep 17 07:26:03.261: INFO: Pod "pod-subpath-test-preprovisionedpv-qxss": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237861575s
Sep 17 07:26:05.365: INFO: Pod "pod-subpath-test-preprovisionedpv-qxss": Phase="Pending", Reason="", readiness=false. Elapsed: 4.341634052s
Sep 17 07:26:07.463: INFO: Pod "pod-subpath-test-preprovisionedpv-qxss": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.438959382s
STEP: Saw pod success
Sep 17 07:26:07.463: INFO: Pod "pod-subpath-test-preprovisionedpv-qxss" satisfied condition "Succeeded or Failed"
Sep 17 07:26:07.560: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-qxss container test-container-volume-preprovisionedpv-qxss: <nil>
STEP: delete the pod
Sep 17 07:26:07.776: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qxss to disappear
Sep 17 07:26:07.872: INFO: Pod pod-subpath-test-preprovisionedpv-qxss no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qxss
Sep 17 07:26:07.872: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qxss" in namespace "provisioning-7096"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":10,"skipped":79,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:11.327: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 149 lines ...
• [SLOW TEST:39.293 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:915
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":6,"skipped":31,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 28 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1582
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":6,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:11.826: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 107 lines ...
Sep 17 07:25:58.484: INFO: PersistentVolumeClaim pvc-nx2c2 found but phase is Pending instead of Bound.
Sep 17 07:26:00.581: INFO: PersistentVolumeClaim pvc-nx2c2 found and phase=Bound (2.193734728s)
Sep 17 07:26:00.581: INFO: Waiting up to 3m0s for PersistentVolume local-w8v9n to have phase Bound
Sep 17 07:26:00.680: INFO: PersistentVolume local-w8v9n found and phase=Bound (98.9252ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-k6wq
STEP: Creating a pod to test subpath
Sep 17 07:26:01.008: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k6wq" in namespace "provisioning-9244" to be "Succeeded or Failed"
Sep 17 07:26:01.125: INFO: Pod "pod-subpath-test-preprovisionedpv-k6wq": Phase="Pending", Reason="", readiness=false. Elapsed: 116.484495ms
Sep 17 07:26:03.222: INFO: Pod "pod-subpath-test-preprovisionedpv-k6wq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214289948s
Sep 17 07:26:05.323: INFO: Pod "pod-subpath-test-preprovisionedpv-k6wq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.315380041s
STEP: Saw pod success
Sep 17 07:26:05.324: INFO: Pod "pod-subpath-test-preprovisionedpv-k6wq" satisfied condition "Succeeded or Failed"
Sep 17 07:26:05.422: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-k6wq container test-container-subpath-preprovisionedpv-k6wq: <nil>
STEP: delete the pod
Sep 17 07:26:05.625: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k6wq to disappear
Sep 17 07:26:05.723: INFO: Pod pod-subpath-test-preprovisionedpv-k6wq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k6wq
Sep 17 07:26:05.723: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k6wq" in namespace "provisioning-9244"
STEP: Creating pod pod-subpath-test-preprovisionedpv-k6wq
STEP: Creating a pod to test subpath
Sep 17 07:26:05.917: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k6wq" in namespace "provisioning-9244" to be "Succeeded or Failed"
Sep 17 07:26:06.013: INFO: Pod "pod-subpath-test-preprovisionedpv-k6wq": Phase="Pending", Reason="", readiness=false. Elapsed: 96.144205ms
Sep 17 07:26:08.112: INFO: Pod "pod-subpath-test-preprovisionedpv-k6wq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.194669856s
STEP: Saw pod success
Sep 17 07:26:08.112: INFO: Pod "pod-subpath-test-preprovisionedpv-k6wq" satisfied condition "Succeeded or Failed"
Sep 17 07:26:08.208: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-k6wq container test-container-subpath-preprovisionedpv-k6wq: <nil>
STEP: delete the pod
Sep 17 07:26:08.409: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k6wq to disappear
Sep 17 07:26:08.505: INFO: Pod pod-subpath-test-preprovisionedpv-k6wq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k6wq
Sep 17 07:26:08.505: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k6wq" in namespace "provisioning-9244"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Sep 17 07:25:55.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Sep 17 07:25:56.202: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 17 07:25:56.409: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3" in namespace "provisioning-3" to be "Succeeded or Failed"
Sep 17 07:25:56.507: INFO: Pod "hostpath-symlink-prep-provisioning-3": Phase="Pending", Reason="", readiness=false. Elapsed: 98.359465ms
Sep 17 07:25:58.605: INFO: Pod "hostpath-symlink-prep-provisioning-3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196107937s
Sep 17 07:26:00.703: INFO: Pod "hostpath-symlink-prep-provisioning-3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293795559s
STEP: Saw pod success
Sep 17 07:26:00.703: INFO: Pod "hostpath-symlink-prep-provisioning-3" satisfied condition "Succeeded or Failed"
Sep 17 07:26:00.703: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3" in namespace "provisioning-3"
Sep 17 07:26:00.804: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3" to be fully deleted
Sep 17 07:26:00.900: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-6ld4
STEP: Creating a pod to test subpath
Sep 17 07:26:01.043: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-6ld4" in namespace "provisioning-3" to be "Succeeded or Failed"
Sep 17 07:26:01.168: INFO: Pod "pod-subpath-test-inlinevolume-6ld4": Phase="Pending", Reason="", readiness=false. Elapsed: 124.857753ms
Sep 17 07:26:03.265: INFO: Pod "pod-subpath-test-inlinevolume-6ld4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221730628s
Sep 17 07:26:05.365: INFO: Pod "pod-subpath-test-inlinevolume-6ld4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322032003s
Sep 17 07:26:07.463: INFO: Pod "pod-subpath-test-inlinevolume-6ld4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.419854286s
STEP: Saw pod success
Sep 17 07:26:07.463: INFO: Pod "pod-subpath-test-inlinevolume-6ld4" satisfied condition "Succeeded or Failed"
Sep 17 07:26:07.560: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-6ld4 container test-container-subpath-inlinevolume-6ld4: <nil>
STEP: delete the pod
Sep 17 07:26:07.768: INFO: Waiting for pod pod-subpath-test-inlinevolume-6ld4 to disappear
Sep 17 07:26:07.864: INFO: Pod pod-subpath-test-inlinevolume-6ld4 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-6ld4
Sep 17 07:26:07.864: INFO: Deleting pod "pod-subpath-test-inlinevolume-6ld4" in namespace "provisioning-3"
STEP: Deleting pod
Sep 17 07:26:07.963: INFO: Deleting pod "pod-subpath-test-inlinevolume-6ld4" in namespace "provisioning-3"
Sep 17 07:26:08.156: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3" in namespace "provisioning-3" to be "Succeeded or Failed"
Sep 17 07:26:08.252: INFO: Pod "hostpath-symlink-prep-provisioning-3": Phase="Pending", Reason="", readiness=false. Elapsed: 96.349412ms
Sep 17 07:26:10.352: INFO: Pod "hostpath-symlink-prep-provisioning-3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196002631s
Sep 17 07:26:12.450: INFO: Pod "hostpath-symlink-prep-provisioning-3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293607939s
STEP: Saw pod success
Sep 17 07:26:12.450: INFO: Pod "hostpath-symlink-prep-provisioning-3" satisfied condition "Succeeded or Failed"
Sep 17 07:26:12.450: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3" in namespace "provisioning-3"
Sep 17 07:26:12.553: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:12.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":8,"skipped":114,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:12.855: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:24:06.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":4,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
    CSIStorageCapacity used, have capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":3,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
... skipping 136 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":55,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:15.462: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:16.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-7671" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:16.362: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 139 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":6,"skipped":85,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:16.742: INFO: Only supported for providers [gce gke] (not aws)
... skipping 112 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Sep 17 07:26:12.483: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-4421e624-2609-4ff4-8fc0-6fa52f5b6ca6" in namespace "security-context-test-172" to be "Succeeded or Failed"
Sep 17 07:26:12.579: INFO: Pod "alpine-nnp-nil-4421e624-2609-4ff4-8fc0-6fa52f5b6ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 96.081299ms
Sep 17 07:26:14.676: INFO: Pod "alpine-nnp-nil-4421e624-2609-4ff4-8fc0-6fa52f5b6ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192372734s
Sep 17 07:26:16.773: INFO: Pod "alpine-nnp-nil-4421e624-2609-4ff4-8fc0-6fa52f5b6ca6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.289739865s
Sep 17 07:26:16.773: INFO: Pod "alpine-nnp-nil-4421e624-2609-4ff4-8fc0-6fa52f5b6ca6" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:16.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-172" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":7,"skipped":68,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:18.158: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 117 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":12,"skipped":98,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:18.781: INFO: Only supported for providers [vsphere] (not aws)
... skipping 87 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:19.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-4344" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":8,"skipped":70,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:19.630: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:20.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-6884" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":9,"skipped":75,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:21.035: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 14 lines ...
Sep 17 07:25:38.147: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-625245lgc
STEP: creating a claim
Sep 17 07:25:38.248: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-knzf
STEP: Creating a pod to test subpath
Sep 17 07:25:38.551: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-knzf" in namespace "provisioning-6252" to be "Succeeded or Failed"
Sep 17 07:25:38.652: INFO: Pod "pod-subpath-test-dynamicpv-knzf": Phase="Pending", Reason="", readiness=false. Elapsed: 100.658406ms
Sep 17 07:25:40.750: INFO: Pod "pod-subpath-test-dynamicpv-knzf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199213024s
Sep 17 07:25:42.849: INFO: Pod "pod-subpath-test-dynamicpv-knzf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298023449s
Sep 17 07:25:44.948: INFO: Pod "pod-subpath-test-dynamicpv-knzf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.39685172s
Sep 17 07:25:47.049: INFO: Pod "pod-subpath-test-dynamicpv-knzf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.498164586s
Sep 17 07:25:49.147: INFO: Pod "pod-subpath-test-dynamicpv-knzf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.596349055s
Sep 17 07:25:51.245: INFO: Pod "pod-subpath-test-dynamicpv-knzf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.693967507s
Sep 17 07:25:53.345: INFO: Pod "pod-subpath-test-dynamicpv-knzf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.79421561s
Sep 17 07:25:55.444: INFO: Pod "pod-subpath-test-dynamicpv-knzf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.892712663s
Sep 17 07:25:57.543: INFO: Pod "pod-subpath-test-dynamicpv-knzf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.991976293s
Sep 17 07:25:59.642: INFO: Pod "pod-subpath-test-dynamicpv-knzf": Phase="Pending", Reason="", readiness=false. Elapsed: 21.090542933s
Sep 17 07:26:01.739: INFO: Pod "pod-subpath-test-dynamicpv-knzf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.188313274s
STEP: Saw pod success
Sep 17 07:26:01.740: INFO: Pod "pod-subpath-test-dynamicpv-knzf" satisfied condition "Succeeded or Failed"
Sep 17 07:26:01.837: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-knzf container test-container-subpath-dynamicpv-knzf: <nil>
STEP: delete the pod
Sep 17 07:26:02.057: INFO: Waiting for pod pod-subpath-test-dynamicpv-knzf to disappear
Sep 17 07:26:02.153: INFO: Pod pod-subpath-test-dynamicpv-knzf no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-knzf
Sep 17 07:26:02.154: INFO: Deleting pod "pod-subpath-test-dynamicpv-knzf" in namespace "provisioning-6252"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":11,"skipped":92,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 23 lines ...
Sep 17 07:26:14.281: INFO: PersistentVolumeClaim pvc-lpf2k found but phase is Pending instead of Bound.
Sep 17 07:26:16.378: INFO: PersistentVolumeClaim pvc-lpf2k found and phase=Bound (12.694609947s)
Sep 17 07:26:16.378: INFO: Waiting up to 3m0s for PersistentVolume local-2xckn to have phase Bound
Sep 17 07:26:16.473: INFO: PersistentVolume local-2xckn found and phase=Bound (95.417117ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mm58
STEP: Creating a pod to test subpath
Sep 17 07:26:16.762: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mm58" in namespace "provisioning-9418" to be "Succeeded or Failed"
Sep 17 07:26:16.880: INFO: Pod "pod-subpath-test-preprovisionedpv-mm58": Phase="Pending", Reason="", readiness=false. Elapsed: 118.308754ms
Sep 17 07:26:18.977: INFO: Pod "pod-subpath-test-preprovisionedpv-mm58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215336582s
Sep 17 07:26:21.073: INFO: Pod "pod-subpath-test-preprovisionedpv-mm58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.311613788s
STEP: Saw pod success
Sep 17 07:26:21.073: INFO: Pod "pod-subpath-test-preprovisionedpv-mm58" satisfied condition "Succeeded or Failed"
Sep 17 07:26:21.169: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-mm58 container test-container-volume-preprovisionedpv-mm58: <nil>
STEP: delete the pod
Sep 17 07:26:21.407: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mm58 to disappear
Sep 17 07:26:21.510: INFO: Pod pod-subpath-test-preprovisionedpv-mm58 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mm58
Sep 17 07:26:21.510: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mm58" in namespace "provisioning-9418"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":12,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:24.430: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 76 lines ...
• [SLOW TEST:14.550 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":9,"skipped":115,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:27.447: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":22,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:28.375: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 86 lines ...
Sep 17 07:26:21.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Sep 17 07:26:21.696: INFO: Waiting up to 5m0s for pod "client-containers-ebf12c28-060c-4091-ae2e-6d4db950263c" in namespace "containers-4188" to be "Succeeded or Failed"
Sep 17 07:26:21.793: INFO: Pod "client-containers-ebf12c28-060c-4091-ae2e-6d4db950263c": Phase="Pending", Reason="", readiness=false. Elapsed: 96.760089ms
Sep 17 07:26:23.889: INFO: Pod "client-containers-ebf12c28-060c-4091-ae2e-6d4db950263c": Phase="Running", Reason="", readiness=true. Elapsed: 2.19288867s
Sep 17 07:26:25.986: INFO: Pod "client-containers-ebf12c28-060c-4091-ae2e-6d4db950263c": Phase="Running", Reason="", readiness=true. Elapsed: 4.289440399s
Sep 17 07:26:28.085: INFO: Pod "client-containers-ebf12c28-060c-4091-ae2e-6d4db950263c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.388475379s
STEP: Saw pod success
Sep 17 07:26:28.085: INFO: Pod "client-containers-ebf12c28-060c-4091-ae2e-6d4db950263c" satisfied condition "Succeeded or Failed"
Sep 17 07:26:28.181: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod client-containers-ebf12c28-060c-4091-ae2e-6d4db950263c container agnhost-container: <nil>
STEP: delete the pod
Sep 17 07:26:28.387: INFO: Waiting for pod client-containers-ebf12c28-060c-4091-ae2e-6d4db950263c to disappear
Sep 17 07:26:28.484: INFO: Pod client-containers-ebf12c28-060c-4091-ae2e-6d4db950263c no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.571 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":98,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:26:23.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Sep 17 07:26:24.451: INFO: Waiting up to 5m0s for pod "busybox-user-0-8f6ffc36-5276-42c8-b8fc-7b1362ce36bb" in namespace "security-context-test-942" to be "Succeeded or Failed"
Sep 17 07:26:24.552: INFO: Pod "busybox-user-0-8f6ffc36-5276-42c8-b8fc-7b1362ce36bb": Phase="Pending", Reason="", readiness=false. Elapsed: 100.76032ms
Sep 17 07:26:26.650: INFO: Pod "busybox-user-0-8f6ffc36-5276-42c8-b8fc-7b1362ce36bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198523674s
Sep 17 07:26:28.749: INFO: Pod "busybox-user-0-8f6ffc36-5276-42c8-b8fc-7b1362ce36bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.297960304s
Sep 17 07:26:28.749: INFO: Pod "busybox-user-0-8f6ffc36-5276-42c8-b8fc-7b1362ce36bb" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:28.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-942" for this suite.


... skipping 243 lines ...
• [SLOW TEST:16.852 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:30.497: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:26:29.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep 17 07:26:29.516: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:31.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2375" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":5,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:32.180: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 07:26:28.077: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-e3cfab6e-b756-46bf-b804-db64a2833df6" in namespace "security-context-test-8141" to be "Succeeded or Failed"
Sep 17 07:26:28.175: INFO: Pod "busybox-privileged-false-e3cfab6e-b756-46bf-b804-db64a2833df6": Phase="Pending", Reason="", readiness=false. Elapsed: 97.723097ms
Sep 17 07:26:30.274: INFO: Pod "busybox-privileged-false-e3cfab6e-b756-46bf-b804-db64a2833df6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196838862s
Sep 17 07:26:32.371: INFO: Pod "busybox-privileged-false-e3cfab6e-b756-46bf-b804-db64a2833df6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.294635989s
Sep 17 07:26:32.372: INFO: Pod "busybox-privileged-false-e3cfab6e-b756-46bf-b804-db64a2833df6" satisfied condition "Succeeded or Failed"
Sep 17 07:26:32.470: INFO: Got logs for pod "busybox-privileged-false-e3cfab6e-b756-46bf-b804-db64a2833df6": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:32.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8141" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":122,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:26:32.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 252 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":6,"skipped":24,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:26:30.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Sep 17 07:26:31.120: INFO: Waiting up to 5m0s for pod "security-context-b209274e-8d28-4b58-8c91-2f70ff41147f" in namespace "security-context-4047" to be "Succeeded or Failed"
Sep 17 07:26:31.218: INFO: Pod "security-context-b209274e-8d28-4b58-8c91-2f70ff41147f": Phase="Pending", Reason="", readiness=false. Elapsed: 97.461642ms
Sep 17 07:26:33.315: INFO: Pod "security-context-b209274e-8d28-4b58-8c91-2f70ff41147f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195136845s
Sep 17 07:26:35.417: INFO: Pod "security-context-b209274e-8d28-4b58-8c91-2f70ff41147f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296819282s
Sep 17 07:26:37.524: INFO: Pod "security-context-b209274e-8d28-4b58-8c91-2f70ff41147f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.404016411s
STEP: Saw pod success
Sep 17 07:26:37.524: INFO: Pod "security-context-b209274e-8d28-4b58-8c91-2f70ff41147f" satisfied condition "Succeeded or Failed"
Sep 17 07:26:37.622: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod security-context-b209274e-8d28-4b58-8c91-2f70ff41147f container test-container: <nil>
STEP: delete the pod
Sep 17 07:26:37.824: INFO: Waiting for pod security-context-b209274e-8d28-4b58-8c91-2f70ff41147f to disappear
Sep 17 07:26:37.921: INFO: Pod security-context-b209274e-8d28-4b58-8c91-2f70ff41147f no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.587 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":6,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:12.373 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:530
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":4,"skipped":43,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:40.864: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 41 lines ...
• [SLOW TEST:29.557 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":7,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:41.607: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
• [SLOW TEST:10.029 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:136
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":6,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:42.226: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 558 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":10,"skipped":61,"failed":0}
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:26:54.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:54.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1796" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:55.034: INFO: Only supported for providers [openstack] (not aws)
... skipping 14 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":12,"skipped":93,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:26:28.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":13,"skipped":93,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":7,"skipped":20,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:26:25.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:26:57.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4714" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":8,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:57.733: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 96 lines ...
Sep 17 07:26:12.434: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6034p7t69
STEP: creating a claim
Sep 17 07:26:12.533: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-dldp
STEP: Creating a pod to test subpath
Sep 17 07:26:12.828: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dldp" in namespace "provisioning-6034" to be "Succeeded or Failed"
Sep 17 07:26:12.927: INFO: Pod "pod-subpath-test-dynamicpv-dldp": Phase="Pending", Reason="", readiness=false. Elapsed: 98.428653ms
Sep 17 07:26:15.024: INFO: Pod "pod-subpath-test-dynamicpv-dldp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19561756s
Sep 17 07:26:17.121: INFO: Pod "pod-subpath-test-dynamicpv-dldp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29265072s
Sep 17 07:26:19.218: INFO: Pod "pod-subpath-test-dynamicpv-dldp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389653881s
Sep 17 07:26:21.315: INFO: Pod "pod-subpath-test-dynamicpv-dldp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.486864052s
Sep 17 07:26:23.413: INFO: Pod "pod-subpath-test-dynamicpv-dldp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.584889293s
... skipping 2 lines ...
Sep 17 07:26:29.707: INFO: Pod "pod-subpath-test-dynamicpv-dldp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.878889388s
Sep 17 07:26:31.806: INFO: Pod "pod-subpath-test-dynamicpv-dldp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.977548449s
Sep 17 07:26:33.904: INFO: Pod "pod-subpath-test-dynamicpv-dldp": Phase="Pending", Reason="", readiness=false. Elapsed: 21.075472421s
Sep 17 07:26:36.001: INFO: Pod "pod-subpath-test-dynamicpv-dldp": Phase="Pending", Reason="", readiness=false. Elapsed: 23.173325421s
Sep 17 07:26:38.103: INFO: Pod "pod-subpath-test-dynamicpv-dldp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.275265724s
STEP: Saw pod success
Sep 17 07:26:38.103: INFO: Pod "pod-subpath-test-dynamicpv-dldp" satisfied condition "Succeeded or Failed"
Sep 17 07:26:38.204: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-dldp container test-container-volume-dynamicpv-dldp: <nil>
STEP: delete the pod
Sep 17 07:26:38.408: INFO: Waiting for pod pod-subpath-test-dynamicpv-dldp to disappear
Sep 17 07:26:38.504: INFO: Pod pod-subpath-test-dynamicpv-dldp no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-dldp
Sep 17 07:26:38.504: INFO: Deleting pod "pod-subpath-test-dynamicpv-dldp" in namespace "provisioning-6034"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":62,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:26:59.788: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 227 lines ...
• [SLOW TEST:45.084 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":7,"skipped":104,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":90,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:07.245: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 128 lines ...
• [SLOW TEST:91.897 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":30,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:25:50.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:78.748 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted startup probe fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:319
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":5,"skipped":28,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 59 lines ...
Sep 17 07:26:28.962: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathk2shw] to have phase Bound
Sep 17 07:26:29.059: INFO: PersistentVolumeClaim csi-hostpathk2shw found but phase is Pending instead of Bound.
Sep 17 07:26:31.159: INFO: PersistentVolumeClaim csi-hostpathk2shw found but phase is Pending instead of Bound.
Sep 17 07:26:33.257: INFO: PersistentVolumeClaim csi-hostpathk2shw found and phase=Bound (4.294909405s)
STEP: Creating pod pod-subpath-test-dynamicpv-dzxc
STEP: Creating a pod to test subpath
Sep 17 07:26:33.550: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dzxc" in namespace "provisioning-5962" to be "Succeeded or Failed"
Sep 17 07:26:33.647: INFO: Pod "pod-subpath-test-dynamicpv-dzxc": Phase="Pending", Reason="", readiness=false. Elapsed: 97.313557ms
Sep 17 07:26:35.746: INFO: Pod "pod-subpath-test-dynamicpv-dzxc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195773704s
Sep 17 07:26:37.848: INFO: Pod "pod-subpath-test-dynamicpv-dzxc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298412584s
Sep 17 07:26:39.946: INFO: Pod "pod-subpath-test-dynamicpv-dzxc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395995712s
Sep 17 07:26:42.044: INFO: Pod "pod-subpath-test-dynamicpv-dzxc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.494217711s
Sep 17 07:26:44.141: INFO: Pod "pod-subpath-test-dynamicpv-dzxc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.591449166s
Sep 17 07:26:46.239: INFO: Pod "pod-subpath-test-dynamicpv-dzxc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.68897823s
Sep 17 07:26:48.336: INFO: Pod "pod-subpath-test-dynamicpv-dzxc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.786392457s
Sep 17 07:26:50.437: INFO: Pod "pod-subpath-test-dynamicpv-dzxc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.886820693s
Sep 17 07:26:52.535: INFO: Pod "pod-subpath-test-dynamicpv-dzxc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.985443799s
STEP: Saw pod success
Sep 17 07:26:52.535: INFO: Pod "pod-subpath-test-dynamicpv-dzxc" satisfied condition "Succeeded or Failed"
Sep 17 07:26:52.632: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-dzxc container test-container-volume-dynamicpv-dzxc: <nil>
STEP: delete the pod
Sep 17 07:26:52.837: INFO: Waiting for pod pod-subpath-test-dynamicpv-dzxc to disappear
Sep 17 07:26:52.936: INFO: Pod pod-subpath-test-dynamicpv-dzxc no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-dzxc
Sep 17 07:26:52.936: INFO: Deleting pod "pod-subpath-test-dynamicpv-dzxc" in namespace "provisioning-5962"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":11,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:09.877: INFO: Only supported for providers [azure] (not aws)
... skipping 30 lines ...
Sep 17 07:26:38.631: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Sep 17 07:26:39.306: INFO: Successfully created a new PD: "aws://eu-west-2a/vol-03ae39d78140347dd".
Sep 17 07:26:39.307: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-tj7f
STEP: Creating a pod to test exec-volume-test
Sep 17 07:26:39.406: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-tj7f" in namespace "volume-6354" to be "Succeeded or Failed"
Sep 17 07:26:39.504: INFO: Pod "exec-volume-test-inlinevolume-tj7f": Phase="Pending", Reason="", readiness=false. Elapsed: 97.189437ms
Sep 17 07:26:41.601: INFO: Pod "exec-volume-test-inlinevolume-tj7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194577946s
Sep 17 07:26:43.718: INFO: Pod "exec-volume-test-inlinevolume-tj7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311611677s
Sep 17 07:26:45.819: INFO: Pod "exec-volume-test-inlinevolume-tj7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41220015s
Sep 17 07:26:47.918: INFO: Pod "exec-volume-test-inlinevolume-tj7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.511365914s
Sep 17 07:26:50.016: INFO: Pod "exec-volume-test-inlinevolume-tj7f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.609990227s
Sep 17 07:26:52.115: INFO: Pod "exec-volume-test-inlinevolume-tj7f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.708647381s
Sep 17 07:26:54.214: INFO: Pod "exec-volume-test-inlinevolume-tj7f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.80735223s
Sep 17 07:26:56.312: INFO: Pod "exec-volume-test-inlinevolume-tj7f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.905606287s
Sep 17 07:26:58.410: INFO: Pod "exec-volume-test-inlinevolume-tj7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.004079771s
STEP: Saw pod success
Sep 17 07:26:58.411: INFO: Pod "exec-volume-test-inlinevolume-tj7f" satisfied condition "Succeeded or Failed"
Sep 17 07:26:58.508: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod exec-volume-test-inlinevolume-tj7f container exec-container-inlinevolume-tj7f: <nil>
STEP: delete the pod
Sep 17 07:26:58.709: INFO: Waiting for pod exec-volume-test-inlinevolume-tj7f to disappear
Sep 17 07:26:58.806: INFO: Pod exec-volume-test-inlinevolume-tj7f no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-tj7f
Sep 17 07:26:58.806: INFO: Deleting pod "exec-volume-test-inlinevolume-tj7f" in namespace "volume-6354"
Sep 17 07:26:59.122: INFO: Couldn't delete PD "aws://eu-west-2a/vol-03ae39d78140347dd", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03ae39d78140347dd is currently attached to i-08e49c3e403a3ad35
	status code: 400, request id: 9ae32cfa-1b67-43ed-95a4-0276c5fb05d5
Sep 17 07:27:04.717: INFO: Couldn't delete PD "aws://eu-west-2a/vol-03ae39d78140347dd", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03ae39d78140347dd is currently attached to i-08e49c3e403a3ad35
	status code: 400, request id: 4c4db3d5-72ac-46b7-b9cb-12d6aedd0d22
Sep 17 07:27:10.242: INFO: Successfully deleted PD "aws://eu-west-2a/vol-03ae39d78140347dd".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:27:10.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6354" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":7,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:10.467: INFO: Only supported for providers [gce gke] (not aws)
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-h9p9
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 07:26:44.663: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-h9p9" in namespace "subpath-9909" to be "Succeeded or Failed"
Sep 17 07:26:44.761: INFO: Pod "pod-subpath-test-secret-h9p9": Phase="Pending", Reason="", readiness=false. Elapsed: 97.861894ms
Sep 17 07:26:46.861: INFO: Pod "pod-subpath-test-secret-h9p9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197205942s
Sep 17 07:26:48.959: INFO: Pod "pod-subpath-test-secret-h9p9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295583619s
Sep 17 07:26:51.059: INFO: Pod "pod-subpath-test-secret-h9p9": Phase="Running", Reason="", readiness=true. Elapsed: 6.39511418s
Sep 17 07:26:53.157: INFO: Pod "pod-subpath-test-secret-h9p9": Phase="Running", Reason="", readiness=true. Elapsed: 8.493247003s
Sep 17 07:26:55.256: INFO: Pod "pod-subpath-test-secret-h9p9": Phase="Running", Reason="", readiness=true. Elapsed: 10.592525956s
... skipping 2 lines ...
Sep 17 07:27:01.572: INFO: Pod "pod-subpath-test-secret-h9p9": Phase="Running", Reason="", readiness=true. Elapsed: 16.90879128s
Sep 17 07:27:03.671: INFO: Pod "pod-subpath-test-secret-h9p9": Phase="Running", Reason="", readiness=true. Elapsed: 19.007318007s
Sep 17 07:27:05.769: INFO: Pod "pod-subpath-test-secret-h9p9": Phase="Running", Reason="", readiness=true. Elapsed: 21.10539319s
Sep 17 07:27:07.868: INFO: Pod "pod-subpath-test-secret-h9p9": Phase="Running", Reason="", readiness=true. Elapsed: 23.204588174s
Sep 17 07:27:09.966: INFO: Pod "pod-subpath-test-secret-h9p9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.302904486s
STEP: Saw pod success
Sep 17 07:27:09.966: INFO: Pod "pod-subpath-test-secret-h9p9" satisfied condition "Succeeded or Failed"
Sep 17 07:27:10.064: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod pod-subpath-test-secret-h9p9 container test-container-subpath-secret-h9p9: <nil>
STEP: delete the pod
Sep 17 07:27:10.273: INFO: Waiting for pod pod-subpath-test-secret-h9p9 to disappear
Sep 17 07:27:10.371: INFO: Pod pod-subpath-test-secret-h9p9 no longer exists
STEP: Deleting pod pod-subpath-test-secret-h9p9
Sep 17 07:27:10.371: INFO: Deleting pod "pod-subpath-test-secret-h9p9" in namespace "subpath-9909"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":37,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 69 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-7fd54f98-3ea7-4253-af42-325fcfc51cc7
STEP: Creating a pod to test consume secrets
Sep 17 07:27:02.622: INFO: Waiting up to 5m0s for pod "pod-secrets-7e4302f0-2f12-4ced-809b-7fd167be935b" in namespace "secrets-1568" to be "Succeeded or Failed"
Sep 17 07:27:02.718: INFO: Pod "pod-secrets-7e4302f0-2f12-4ced-809b-7fd167be935b": Phase="Pending", Reason="", readiness=false. Elapsed: 96.107438ms
Sep 17 07:27:04.816: INFO: Pod "pod-secrets-7e4302f0-2f12-4ced-809b-7fd167be935b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193570282s
Sep 17 07:27:06.913: INFO: Pod "pod-secrets-7e4302f0-2f12-4ced-809b-7fd167be935b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290461449s
Sep 17 07:27:09.010: INFO: Pod "pod-secrets-7e4302f0-2f12-4ced-809b-7fd167be935b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.388030852s
Sep 17 07:27:11.107: INFO: Pod "pod-secrets-7e4302f0-2f12-4ced-809b-7fd167be935b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.484599176s
Sep 17 07:27:13.205: INFO: Pod "pod-secrets-7e4302f0-2f12-4ced-809b-7fd167be935b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.582583194s
STEP: Saw pod success
Sep 17 07:27:13.205: INFO: Pod "pod-secrets-7e4302f0-2f12-4ced-809b-7fd167be935b" satisfied condition "Succeeded or Failed"
Sep 17 07:27:13.302: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod pod-secrets-7e4302f0-2f12-4ced-809b-7fd167be935b container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 07:27:13.503: INFO: Waiting for pod pod-secrets-7e4302f0-2f12-4ced-809b-7fd167be935b to disappear
Sep 17 07:27:13.599: INFO: Pod pod-secrets-7e4302f0-2f12-4ced-809b-7fd167be935b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 44 lines ...
• [SLOW TEST:13.213 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":8,"skipped":107,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Sep 17 07:26:44.750: INFO: PersistentVolumeClaim pvc-cfwfb found but phase is Pending instead of Bound.
Sep 17 07:26:46.847: INFO: PersistentVolumeClaim pvc-cfwfb found and phase=Bound (2.193370168s)
Sep 17 07:26:46.847: INFO: Waiting up to 3m0s for PersistentVolume local-httrn to have phase Bound
Sep 17 07:26:46.944: INFO: PersistentVolume local-httrn found and phase=Bound (96.423951ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9brk
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 07:26:47.235: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9brk" in namespace "provisioning-3069" to be "Succeeded or Failed"
Sep 17 07:26:47.333: INFO: Pod "pod-subpath-test-preprovisionedpv-9brk": Phase="Pending", Reason="", readiness=false. Elapsed: 97.321292ms
Sep 17 07:26:49.431: INFO: Pod "pod-subpath-test-preprovisionedpv-9brk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195304855s
Sep 17 07:26:51.528: INFO: Pod "pod-subpath-test-preprovisionedpv-9brk": Phase="Running", Reason="", readiness=true. Elapsed: 4.292421987s
Sep 17 07:26:53.625: INFO: Pod "pod-subpath-test-preprovisionedpv-9brk": Phase="Running", Reason="", readiness=true. Elapsed: 6.389857429s
Sep 17 07:26:55.726: INFO: Pod "pod-subpath-test-preprovisionedpv-9brk": Phase="Running", Reason="", readiness=true. Elapsed: 8.49089314s
Sep 17 07:26:57.823: INFO: Pod "pod-subpath-test-preprovisionedpv-9brk": Phase="Running", Reason="", readiness=true. Elapsed: 10.587829959s
... skipping 2 lines ...
Sep 17 07:27:04.116: INFO: Pod "pod-subpath-test-preprovisionedpv-9brk": Phase="Running", Reason="", readiness=true. Elapsed: 16.88062008s
Sep 17 07:27:06.214: INFO: Pod "pod-subpath-test-preprovisionedpv-9brk": Phase="Running", Reason="", readiness=true. Elapsed: 18.978681488s
Sep 17 07:27:08.312: INFO: Pod "pod-subpath-test-preprovisionedpv-9brk": Phase="Running", Reason="", readiness=true. Elapsed: 21.076539236s
Sep 17 07:27:10.410: INFO: Pod "pod-subpath-test-preprovisionedpv-9brk": Phase="Running", Reason="", readiness=true. Elapsed: 23.174644169s
Sep 17 07:27:12.508: INFO: Pod "pod-subpath-test-preprovisionedpv-9brk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.272398387s
STEP: Saw pod success
Sep 17 07:27:12.508: INFO: Pod "pod-subpath-test-preprovisionedpv-9brk" satisfied condition "Succeeded or Failed"
Sep 17 07:27:12.605: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-9brk container test-container-subpath-preprovisionedpv-9brk: <nil>
STEP: delete the pod
Sep 17 07:27:12.809: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9brk to disappear
Sep 17 07:27:12.907: INFO: Pod pod-subpath-test-preprovisionedpv-9brk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9brk
Sep 17 07:27:12.907: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9brk" in namespace "provisioning-3069"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:15.005: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":107,"failed":0}
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:27:13.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apply
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Destroying namespace "apply-5588" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":9,"skipped":107,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:15.256: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 41 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":6,"skipped":54,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 47 lines ...
Sep 17 07:26:43.425: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-h6rjg] to have phase Bound
Sep 17 07:26:43.522: INFO: PersistentVolumeClaim pvc-h6rjg found and phase=Bound (96.560402ms)
STEP: Deleting the previously created pod
Sep 17 07:26:50.010: INFO: Deleting pod "pvc-volume-tester-jqt7f" in namespace "csi-mock-volumes-6890"
Sep 17 07:26:50.109: INFO: Wait up to 5m0s for pod "pvc-volume-tester-jqt7f" to be fully deleted
STEP: Checking CSI driver logs
Sep 17 07:26:52.406: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/887debae-ace5-4abf-985b-7d221af01e85/volumes/kubernetes.io~csi/pvc-36d9ec66-300f-4ea4-b49a-34c5ca4b92ab/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-jqt7f
Sep 17 07:26:52.407: INFO: Deleting pod "pvc-volume-tester-jqt7f" in namespace "csi-mock-volumes-6890"
STEP: Deleting claim pvc-h6rjg
Sep 17 07:26:52.699: INFO: Waiting up to 2m0s for PersistentVolume pvc-36d9ec66-300f-4ea4-b49a-34c5ca4b92ab to get deleted
Sep 17 07:26:52.796: INFO: PersistentVolume pvc-36d9ec66-300f-4ea4-b49a-34c5ca4b92ab found and phase=Released (96.312777ms)
Sep 17 07:26:54.893: INFO: PersistentVolume pvc-36d9ec66-300f-4ea4-b49a-34c5ca4b92ab found and phase=Released (2.193459849s)
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should not be passed when podInfoOnMount=nil
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":11,"skipped":134,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:17.023: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 195 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
    should modify fsGroup if fsGroupPolicy=File
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":7,"skipped":53,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":8,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:20.808: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 89 lines ...
STEP: Destroying namespace "services-7738" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":9,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:22.904: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":8,"skipped":59,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:27:19.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:27:22.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3276" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":9,"skipped":59,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
STEP: updating the pod
Sep 17 07:27:19.005: INFO: Successfully updated pod "pod-update-activedeadlineseconds-88d52114-003b-4e36-938a-5e3851f503a0"
Sep 17 07:27:19.006: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-88d52114-003b-4e36-938a-5e3851f503a0" in namespace "pods-3967" to be "terminated due to deadline exceeded"
Sep 17 07:27:19.102: INFO: Pod "pod-update-activedeadlineseconds-88d52114-003b-4e36-938a-5e3851f503a0": Phase="Running", Reason="", readiness=true. Elapsed: 96.443323ms
Sep 17 07:27:21.199: INFO: Pod "pod-update-activedeadlineseconds-88d52114-003b-4e36-938a-5e3851f503a0": Phase="Running", Reason="", readiness=true. Elapsed: 2.193782998s
Sep 17 07:27:23.296: INFO: Pod "pod-update-activedeadlineseconds-88d52114-003b-4e36-938a-5e3851f503a0": Phase="Running", Reason="", readiness=true. Elapsed: 4.290504589s
Sep 17 07:27:25.402: INFO: Pod "pod-update-activedeadlineseconds-88d52114-003b-4e36-938a-5e3851f503a0": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 6.396020947s
Sep 17 07:27:25.402: INFO: Pod "pod-update-activedeadlineseconds-88d52114-003b-4e36-938a-5e3851f503a0" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:27:25.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3967" for this suite.


• [SLOW TEST:16.269 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:25.611: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 152 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:27:26.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-3632" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":10,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 112 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":9,"skipped":41,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
STEP: Destroying namespace "pod-disks-2333" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.688 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449

  Requires at least 2 nodes (not 0)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
... skipping 40 lines ...
• [SLOW TEST:18.930 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":7,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:28.153: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 17 07:27:23.593: INFO: Waiting up to 5m0s for pod "pod-f290b1f6-af9c-4f90-b1ee-47698261aa14" in namespace "emptydir-1216" to be "Succeeded or Failed"
Sep 17 07:27:23.689: INFO: Pod "pod-f290b1f6-af9c-4f90-b1ee-47698261aa14": Phase="Pending", Reason="", readiness=false. Elapsed: 96.135723ms
Sep 17 07:27:25.786: INFO: Pod "pod-f290b1f6-af9c-4f90-b1ee-47698261aa14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19296538s
Sep 17 07:27:27.884: INFO: Pod "pod-f290b1f6-af9c-4f90-b1ee-47698261aa14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.291073206s
STEP: Saw pod success
Sep 17 07:27:27.884: INFO: Pod "pod-f290b1f6-af9c-4f90-b1ee-47698261aa14" satisfied condition "Succeeded or Failed"
Sep 17 07:27:27.983: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-f290b1f6-af9c-4f90-b1ee-47698261aa14 container test-container: <nil>
STEP: delete the pod
Sep 17 07:27:28.202: INFO: Waiting for pod pod-f290b1f6-af9c-4f90-b1ee-47698261aa14 to disappear
Sep 17 07:27:28.313: INFO: Pod pod-f290b1f6-af9c-4f90-b1ee-47698261aa14 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is non-root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":10,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:27:15.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 07:27:16.513: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-02d59d13-af70-47ad-896d-330f02f8f24c" in namespace "security-context-test-4472" to be "Succeeded or Failed"
Sep 17 07:27:16.612: INFO: Pod "alpine-nnp-false-02d59d13-af70-47ad-896d-330f02f8f24c": Phase="Pending", Reason="", readiness=false. Elapsed: 98.426723ms
Sep 17 07:27:18.709: INFO: Pod "alpine-nnp-false-02d59d13-af70-47ad-896d-330f02f8f24c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195269912s
Sep 17 07:27:20.806: INFO: Pod "alpine-nnp-false-02d59d13-af70-47ad-896d-330f02f8f24c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292671684s
Sep 17 07:27:22.902: INFO: Pod "alpine-nnp-false-02d59d13-af70-47ad-896d-330f02f8f24c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389079547s
Sep 17 07:27:25.001: INFO: Pod "alpine-nnp-false-02d59d13-af70-47ad-896d-330f02f8f24c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.487776975s
Sep 17 07:27:27.100: INFO: Pod "alpine-nnp-false-02d59d13-af70-47ad-896d-330f02f8f24c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.586369667s
Sep 17 07:27:29.197: INFO: Pod "alpine-nnp-false-02d59d13-af70-47ad-896d-330f02f8f24c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.683877116s
Sep 17 07:27:29.197: INFO: Pod "alpine-nnp-false-02d59d13-af70-47ad-896d-330f02f8f24c" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:27:29.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4472" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":56,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:29.510: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 138 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":11,"skipped":103,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:29.975: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":5,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:30.301: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:27:30.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5892" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":8,"skipped":62,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 07:27:17.665: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31812aa8-a4e3-481c-8af0-809a9e76ea7f" in namespace "downward-api-85" to be "Succeeded or Failed"
Sep 17 07:27:17.761: INFO: Pod "downwardapi-volume-31812aa8-a4e3-481c-8af0-809a9e76ea7f": Phase="Pending", Reason="", readiness=false. Elapsed: 96.446824ms
Sep 17 07:27:19.860: INFO: Pod "downwardapi-volume-31812aa8-a4e3-481c-8af0-809a9e76ea7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194958669s
Sep 17 07:27:21.957: INFO: Pod "downwardapi-volume-31812aa8-a4e3-481c-8af0-809a9e76ea7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292313569s
Sep 17 07:27:24.054: INFO: Pod "downwardapi-volume-31812aa8-a4e3-481c-8af0-809a9e76ea7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389456219s
Sep 17 07:27:26.152: INFO: Pod "downwardapi-volume-31812aa8-a4e3-481c-8af0-809a9e76ea7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.487280528s
Sep 17 07:27:28.249: INFO: Pod "downwardapi-volume-31812aa8-a4e3-481c-8af0-809a9e76ea7f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.584719568s
Sep 17 07:27:30.347: INFO: Pod "downwardapi-volume-31812aa8-a4e3-481c-8af0-809a9e76ea7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.681808036s
STEP: Saw pod success
Sep 17 07:27:30.347: INFO: Pod "downwardapi-volume-31812aa8-a4e3-481c-8af0-809a9e76ea7f" satisfied condition "Succeeded or Failed"
Sep 17 07:27:30.443: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod downwardapi-volume-31812aa8-a4e3-481c-8af0-809a9e76ea7f container client-container: <nil>
STEP: delete the pod
Sep 17 07:27:30.661: INFO: Waiting for pod downwardapi-volume-31812aa8-a4e3-481c-8af0-809a9e76ea7f to disappear
Sep 17 07:27:30.759: INFO: Pod downwardapi-volume-31812aa8-a4e3-481c-8af0-809a9e76ea7f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:13.871 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":150,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":10,"skipped":101,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:31.249: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
STEP: Creating a pod to test downward API volume plugin
Sep 17 07:27:28.839: INFO: Waiting up to 5m0s for pod "metadata-volume-c9246797-961a-4a5a-b214-76da16bec94b" in namespace "downward-api-2959" to be "Succeeded or Failed"
Sep 17 07:27:28.939: INFO: Pod "metadata-volume-c9246797-961a-4a5a-b214-76da16bec94b": Phase="Pending", Reason="", readiness=false. Elapsed: 100.295016ms
Sep 17 07:27:31.037: INFO: Pod "metadata-volume-c9246797-961a-4a5a-b214-76da16bec94b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197859782s
Sep 17 07:27:33.134: INFO: Pod "metadata-volume-c9246797-961a-4a5a-b214-76da16bec94b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.294927616s
STEP: Saw pod success
Sep 17 07:27:33.134: INFO: Pod "metadata-volume-c9246797-961a-4a5a-b214-76da16bec94b" satisfied condition "Succeeded or Failed"
Sep 17 07:27:33.231: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod metadata-volume-c9246797-961a-4a5a-b214-76da16bec94b container client-container: <nil>
STEP: delete the pod
Sep 17 07:27:33.441: INFO: Waiting for pod metadata-volume-c9246797-961a-4a5a-b214-76da16bec94b to disappear
Sep 17 07:27:33.539: INFO: Pod metadata-volume-c9246797-961a-4a5a-b214-76da16bec94b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.564 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":34,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:33.765: INFO: Only supported for providers [azure] (not aws)
... skipping 166 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-b580f16c-a645-4d9e-bb4e-a2a8e7542d32
STEP: Creating a pod to test consume secrets
Sep 17 07:27:30.684: INFO: Waiting up to 5m0s for pod "pod-secrets-a8e79058-f3eb-4fbe-ab6c-d17c1b9751c3" in namespace "secrets-8161" to be "Succeeded or Failed"
Sep 17 07:27:30.780: INFO: Pod "pod-secrets-a8e79058-f3eb-4fbe-ab6c-d17c1b9751c3": Phase="Pending", Reason="", readiness=false. Elapsed: 96.737337ms
Sep 17 07:27:32.879: INFO: Pod "pod-secrets-a8e79058-f3eb-4fbe-ab6c-d17c1b9751c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195664789s
Sep 17 07:27:34.977: INFO: Pod "pod-secrets-a8e79058-f3eb-4fbe-ab6c-d17c1b9751c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29333014s
Sep 17 07:27:37.073: INFO: Pod "pod-secrets-a8e79058-f3eb-4fbe-ab6c-d17c1b9751c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.389818138s
STEP: Saw pod success
Sep 17 07:27:37.073: INFO: Pod "pod-secrets-a8e79058-f3eb-4fbe-ab6c-d17c1b9751c3" satisfied condition "Succeeded or Failed"
Sep 17 07:27:37.171: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-secrets-a8e79058-f3eb-4fbe-ab6c-d17c1b9751c3 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 07:27:37.369: INFO: Waiting for pod pod-secrets-a8e79058-f3eb-4fbe-ab6c-d17c1b9751c3 to disappear
Sep 17 07:27:37.465: INFO: Pod pod-secrets-a8e79058-f3eb-4fbe-ab6c-d17c1b9751c3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":11,"skipped":109,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:42.274: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 42 lines ...
Sep 17 07:27:30.203: INFO: PersistentVolumeClaim pvc-zfs7d found but phase is Pending instead of Bound.
Sep 17 07:27:32.300: INFO: PersistentVolumeClaim pvc-zfs7d found and phase=Bound (10.593485758s)
Sep 17 07:27:32.300: INFO: Waiting up to 3m0s for PersistentVolume local-l5sv8 to have phase Bound
Sep 17 07:27:32.397: INFO: PersistentVolume local-l5sv8 found and phase=Bound (96.711066ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xn5h
STEP: Creating a pod to test subpath
Sep 17 07:27:32.691: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xn5h" in namespace "provisioning-2453" to be "Succeeded or Failed"
Sep 17 07:27:32.788: INFO: Pod "pod-subpath-test-preprovisionedpv-xn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 97.454643ms
Sep 17 07:27:34.885: INFO: Pod "pod-subpath-test-preprovisionedpv-xn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194418375s
Sep 17 07:27:36.983: INFO: Pod "pod-subpath-test-preprovisionedpv-xn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29196997s
Sep 17 07:27:39.081: INFO: Pod "pod-subpath-test-preprovisionedpv-xn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.390059123s
Sep 17 07:27:41.178: INFO: Pod "pod-subpath-test-preprovisionedpv-xn5h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.48739627s
STEP: Saw pod success
Sep 17 07:27:41.178: INFO: Pod "pod-subpath-test-preprovisionedpv-xn5h" satisfied condition "Succeeded or Failed"
Sep 17 07:27:41.275: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-xn5h container test-container-subpath-preprovisionedpv-xn5h: <nil>
STEP: delete the pod
Sep 17 07:27:41.475: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xn5h to disappear
Sep 17 07:27:41.597: INFO: Pod pod-subpath-test-preprovisionedpv-xn5h no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xn5h
Sep 17 07:27:41.597: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xn5h" in namespace "provisioning-2453"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":9,"skipped":108,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:42.974: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 105 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":5,"skipped":26,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:43.375: INFO: Driver csi-hostpath doesn't support ext4 -- skipping
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:27:43.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-5123" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":10,"skipped":125,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:43.945: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 44 lines ...
• [SLOW TEST:14.072 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 17 07:27:01.007: INFO: File wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local from pod  dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 07:27:01.105: INFO: File jessie_udp@dns-test-service-3.dns-9719.svc.cluster.local from pod  dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 07:27:01.105: INFO: Lookups using dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd failed for: [wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local jessie_udp@dns-test-service-3.dns-9719.svc.cluster.local]

Sep 17 07:27:06.203: INFO: File wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local from pod  dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 07:27:06.300: INFO: File jessie_udp@dns-test-service-3.dns-9719.svc.cluster.local from pod  dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 07:27:06.300: INFO: Lookups using dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd failed for: [wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local jessie_udp@dns-test-service-3.dns-9719.svc.cluster.local]

Sep 17 07:27:11.204: INFO: File wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local from pod  dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 07:27:11.303: INFO: File jessie_udp@dns-test-service-3.dns-9719.svc.cluster.local from pod  dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 07:27:11.303: INFO: Lookups using dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd failed for: [wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local jessie_udp@dns-test-service-3.dns-9719.svc.cluster.local]

Sep 17 07:27:16.203: INFO: File wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local from pod  dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 07:27:16.302: INFO: File jessie_udp@dns-test-service-3.dns-9719.svc.cluster.local from pod  dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 07:27:16.302: INFO: Lookups using dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd failed for: [wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local jessie_udp@dns-test-service-3.dns-9719.svc.cluster.local]

Sep 17 07:27:21.202: INFO: File wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local from pod  dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 07:27:21.300: INFO: File jessie_udp@dns-test-service-3.dns-9719.svc.cluster.local from pod  dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 07:27:21.300: INFO: Lookups using dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd failed for: [wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local jessie_udp@dns-test-service-3.dns-9719.svc.cluster.local]

Sep 17 07:27:26.202: INFO: File wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local from pod  dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 07:27:26.300: INFO: File jessie_udp@dns-test-service-3.dns-9719.svc.cluster.local from pod  dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 07:27:26.300: INFO: Lookups using dns-9719/dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd failed for: [wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local jessie_udp@dns-test-service-3.dns-9719.svc.cluster.local]

Sep 17 07:27:31.306: INFO: DNS probes using dns-test-f9c19d43-ef1c-4885-addb-c1c522c812fd succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9719.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9719.svc.cluster.local; sleep 1; done
... skipping 17 lines ...
• [SLOW TEST:49.680 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":12,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
Sep 17 07:27:44.246: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep 17 07:27:44.246: INFO: stdout: "scheduler etcd-0 controller-manager etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
Sep 17 07:27:44.246: INFO: Running '/tmp/kubectl3656349511/kubectl --server=https://api.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5966 get componentstatuses scheduler'
Sep 17 07:27:44.611: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep 17 07:27:44.611: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of etcd-0
Sep 17 07:27:44.611: INFO: Running '/tmp/kubectl3656349511/kubectl --server=https://api.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5966 get componentstatuses etcd-0'
Sep 17 07:27:44.984: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep 17 07:27:44.984: INFO: stdout: "NAME     STATUS    MESSAGE                         ERROR\netcd-0   Healthy   {\"health\":\"true\",\"reason\":\"\"}   \n"
STEP: getting status of controller-manager
Sep 17 07:27:44.984: INFO: Running '/tmp/kubectl3656349511/kubectl --server=https://api.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5966 get componentstatuses controller-manager'
Sep 17 07:27:45.353: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep 17 07:27:45.353: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of etcd-1
Sep 17 07:27:45.353: INFO: Running '/tmp/kubectl3656349511/kubectl --server=https://api.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5966 get componentstatuses etcd-1'
Sep 17 07:27:45.718: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep 17 07:27:45.718: INFO: stdout: "NAME     STATUS    MESSAGE                         ERROR\netcd-1   Healthy   {\"health\":\"true\",\"reason\":\"\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:27:45.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5966" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":6,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:45.926: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:27:13.028: INFO: >>> kubeConfig: /root/.kube/config
... skipping 19 lines ...
Sep 17 07:27:29.454: INFO: PersistentVolumeClaim pvc-76slz found but phase is Pending instead of Bound.
Sep 17 07:27:31.568: INFO: PersistentVolumeClaim pvc-76slz found and phase=Bound (14.799649224s)
Sep 17 07:27:31.568: INFO: Waiting up to 3m0s for PersistentVolume local-lg6x2 to have phase Bound
Sep 17 07:27:31.665: INFO: PersistentVolume local-lg6x2 found and phase=Bound (96.767463ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-sr6b
STEP: Creating a pod to test subpath
Sep 17 07:27:31.957: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-sr6b" in namespace "provisioning-3330" to be "Succeeded or Failed"
Sep 17 07:27:32.054: INFO: Pod "pod-subpath-test-preprovisionedpv-sr6b": Phase="Pending", Reason="", readiness=false. Elapsed: 97.131824ms
Sep 17 07:27:34.152: INFO: Pod "pod-subpath-test-preprovisionedpv-sr6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194540285s
Sep 17 07:27:36.248: INFO: Pod "pod-subpath-test-preprovisionedpv-sr6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291302085s
Sep 17 07:27:38.346: INFO: Pod "pod-subpath-test-preprovisionedpv-sr6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.388685145s
Sep 17 07:27:40.442: INFO: Pod "pod-subpath-test-preprovisionedpv-sr6b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.485519843s
Sep 17 07:27:42.540: INFO: Pod "pod-subpath-test-preprovisionedpv-sr6b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.582688652s
Sep 17 07:27:44.638: INFO: Pod "pod-subpath-test-preprovisionedpv-sr6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.681035275s
STEP: Saw pod success
Sep 17 07:27:44.638: INFO: Pod "pod-subpath-test-preprovisionedpv-sr6b" satisfied condition "Succeeded or Failed"
Sep 17 07:27:44.741: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-sr6b container test-container-volume-preprovisionedpv-sr6b: <nil>
STEP: delete the pod
Sep 17 07:27:44.957: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-sr6b to disappear
Sep 17 07:27:45.053: INFO: Pod pod-subpath-test-preprovisionedpv-sr6b no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-sr6b
Sep 17 07:27:45.053: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-sr6b" in namespace "provisioning-3330"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":9,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:27:48.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-8216" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":7,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:49.001: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 27 lines ...
Sep 17 07:27:10.380: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-4912g88z9
STEP: creating a claim
Sep 17 07:27:10.482: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-4ztz
STEP: Creating a pod to test exec-volume-test
Sep 17 07:27:10.777: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-4ztz" in namespace "volume-4912" to be "Succeeded or Failed"
Sep 17 07:27:10.874: INFO: Pod "exec-volume-test-dynamicpv-4ztz": Phase="Pending", Reason="", readiness=false. Elapsed: 97.078619ms
Sep 17 07:27:12.973: INFO: Pod "exec-volume-test-dynamicpv-4ztz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195997775s
Sep 17 07:27:15.071: INFO: Pod "exec-volume-test-dynamicpv-4ztz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294613503s
Sep 17 07:27:17.168: INFO: Pod "exec-volume-test-dynamicpv-4ztz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391677577s
Sep 17 07:27:19.267: INFO: Pod "exec-volume-test-dynamicpv-4ztz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.490021583s
Sep 17 07:27:21.364: INFO: Pod "exec-volume-test-dynamicpv-4ztz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.587325672s
Sep 17 07:27:23.464: INFO: Pod "exec-volume-test-dynamicpv-4ztz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.686956883s
Sep 17 07:27:25.562: INFO: Pod "exec-volume-test-dynamicpv-4ztz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.785156354s
Sep 17 07:27:27.661: INFO: Pod "exec-volume-test-dynamicpv-4ztz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.883730332s
Sep 17 07:27:29.759: INFO: Pod "exec-volume-test-dynamicpv-4ztz": Phase="Pending", Reason="", readiness=false. Elapsed: 18.982019691s
Sep 17 07:27:31.857: INFO: Pod "exec-volume-test-dynamicpv-4ztz": Phase="Pending", Reason="", readiness=false. Elapsed: 21.080224458s
Sep 17 07:27:33.955: INFO: Pod "exec-volume-test-dynamicpv-4ztz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.178031528s
STEP: Saw pod success
Sep 17 07:27:33.955: INFO: Pod "exec-volume-test-dynamicpv-4ztz" satisfied condition "Succeeded or Failed"
Sep 17 07:27:34.052: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod exec-volume-test-dynamicpv-4ztz container exec-container-dynamicpv-4ztz: <nil>
STEP: delete the pod
Sep 17 07:27:34.251: INFO: Waiting for pod exec-volume-test-dynamicpv-4ztz to disappear
Sep 17 07:27:34.348: INFO: Pod exec-volume-test-dynamicpv-4ztz no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-4ztz
Sep 17 07:27:34.348: INFO: Deleting pod "exec-volume-test-dynamicpv-4ztz" in namespace "volume-4912"
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":12,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 15 lines ...
Sep 17 07:27:29.589: INFO: PersistentVolumeClaim pvc-7hbbn found but phase is Pending instead of Bound.
Sep 17 07:27:31.686: INFO: PersistentVolumeClaim pvc-7hbbn found and phase=Bound (2.193567981s)
Sep 17 07:27:31.686: INFO: Waiting up to 3m0s for PersistentVolume local-f62s5 to have phase Bound
Sep 17 07:27:31.783: INFO: PersistentVolume local-f62s5 found and phase=Bound (96.38459ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dbhs
STEP: Creating a pod to test subpath
Sep 17 07:27:32.080: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dbhs" in namespace "provisioning-4749" to be "Succeeded or Failed"
Sep 17 07:27:32.177: INFO: Pod "pod-subpath-test-preprovisionedpv-dbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 97.009064ms
Sep 17 07:27:34.274: INFO: Pod "pod-subpath-test-preprovisionedpv-dbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193922563s
Sep 17 07:27:36.372: INFO: Pod "pod-subpath-test-preprovisionedpv-dbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292103937s
Sep 17 07:27:38.469: INFO: Pod "pod-subpath-test-preprovisionedpv-dbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389076965s
Sep 17 07:27:40.566: INFO: Pod "pod-subpath-test-preprovisionedpv-dbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.48628797s
Sep 17 07:27:42.671: INFO: Pod "pod-subpath-test-preprovisionedpv-dbhs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.591260103s
STEP: Saw pod success
Sep 17 07:27:42.671: INFO: Pod "pod-subpath-test-preprovisionedpv-dbhs" satisfied condition "Succeeded or Failed"
Sep 17 07:27:42.771: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-dbhs container test-container-subpath-preprovisionedpv-dbhs: <nil>
STEP: delete the pod
Sep 17 07:27:42.972: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dbhs to disappear
Sep 17 07:27:43.070: INFO: Pod pod-subpath-test-preprovisionedpv-dbhs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dbhs
Sep 17 07:27:43.070: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dbhs" in namespace "provisioning-4749"
STEP: Creating pod pod-subpath-test-preprovisionedpv-dbhs
STEP: Creating a pod to test subpath
Sep 17 07:27:43.265: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dbhs" in namespace "provisioning-4749" to be "Succeeded or Failed"
Sep 17 07:27:43.362: INFO: Pod "pod-subpath-test-preprovisionedpv-dbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 96.872826ms
Sep 17 07:27:45.459: INFO: Pod "pod-subpath-test-preprovisionedpv-dbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194611557s
Sep 17 07:27:47.559: INFO: Pod "pod-subpath-test-preprovisionedpv-dbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294239024s
Sep 17 07:27:49.656: INFO: Pod "pod-subpath-test-preprovisionedpv-dbhs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.391221381s
STEP: Saw pod success
Sep 17 07:27:49.656: INFO: Pod "pod-subpath-test-preprovisionedpv-dbhs" satisfied condition "Succeeded or Failed"
Sep 17 07:27:49.753: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-dbhs container test-container-subpath-preprovisionedpv-dbhs: <nil>
STEP: delete the pod
Sep 17 07:27:49.958: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dbhs to disappear
Sep 17 07:27:50.055: INFO: Pod pod-subpath-test-preprovisionedpv-dbhs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dbhs
Sep 17 07:27:50.055: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dbhs" in namespace "provisioning-4749"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":7,"skipped":59,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:7.158 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:51.584: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 129 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":11,"skipped":59,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:52.646: INFO: Only supported for providers [vsphere] (not aws)
... skipping 64 lines ...
• [SLOW TEST:70.693 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":20,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:52.983: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 130 lines ...
Sep 17 07:27:45.057: INFO: PersistentVolumeClaim pvc-d8sjv found but phase is Pending instead of Bound.
Sep 17 07:27:47.160: INFO: PersistentVolumeClaim pvc-d8sjv found and phase=Bound (6.392988719s)
Sep 17 07:27:47.160: INFO: Waiting up to 3m0s for PersistentVolume local-kf5k8 to have phase Bound
Sep 17 07:27:47.259: INFO: PersistentVolume local-kf5k8 found and phase=Bound (99.084646ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fmzb
STEP: Creating a pod to test subpath
Sep 17 07:27:47.553: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fmzb" in namespace "provisioning-7981" to be "Succeeded or Failed"
Sep 17 07:27:47.649: INFO: Pod "pod-subpath-test-preprovisionedpv-fmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 96.2618ms
Sep 17 07:27:49.747: INFO: Pod "pod-subpath-test-preprovisionedpv-fmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1941238s
Sep 17 07:27:51.844: INFO: Pod "pod-subpath-test-preprovisionedpv-fmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291262922s
Sep 17 07:27:53.941: INFO: Pod "pod-subpath-test-preprovisionedpv-fmzb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.388688164s
STEP: Saw pod success
Sep 17 07:27:53.941: INFO: Pod "pod-subpath-test-preprovisionedpv-fmzb" satisfied condition "Succeeded or Failed"
Sep 17 07:27:54.041: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-fmzb container test-container-subpath-preprovisionedpv-fmzb: <nil>
STEP: delete the pod
Sep 17 07:27:54.242: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fmzb to disappear
Sep 17 07:27:54.338: INFO: Pod pod-subpath-test-preprovisionedpv-fmzb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fmzb
Sep 17 07:27:54.338: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fmzb" in namespace "provisioning-7981"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":9,"skipped":64,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:27:55.794: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
... skipping 55 lines ...
• [SLOW TEST:13.149 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":10,"skipped":34,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":109,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:27:37.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Sep 17 07:27:38.154: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 17 07:27:38.351: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-119" in namespace "provisioning-119" to be "Succeeded or Failed"
Sep 17 07:27:38.447: INFO: Pod "hostpath-symlink-prep-provisioning-119": Phase="Pending", Reason="", readiness=false. Elapsed: 96.12758ms
Sep 17 07:27:40.544: INFO: Pod "hostpath-symlink-prep-provisioning-119": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19257s
Sep 17 07:27:42.640: INFO: Pod "hostpath-symlink-prep-provisioning-119": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288898087s
Sep 17 07:27:44.741: INFO: Pod "hostpath-symlink-prep-provisioning-119": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.389817773s
STEP: Saw pod success
Sep 17 07:27:44.741: INFO: Pod "hostpath-symlink-prep-provisioning-119" satisfied condition "Succeeded or Failed"
Sep 17 07:27:44.741: INFO: Deleting pod "hostpath-symlink-prep-provisioning-119" in namespace "provisioning-119"
Sep 17 07:27:44.845: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-119" to be fully deleted
Sep 17 07:27:44.944: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9knx
STEP: Creating a pod to test subpath
Sep 17 07:27:45.042: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9knx" in namespace "provisioning-119" to be "Succeeded or Failed"
Sep 17 07:27:45.138: INFO: Pod "pod-subpath-test-inlinevolume-9knx": Phase="Pending", Reason="", readiness=false. Elapsed: 95.813641ms
Sep 17 07:27:47.237: INFO: Pod "pod-subpath-test-inlinevolume-9knx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194571016s
Sep 17 07:27:49.334: INFO: Pod "pod-subpath-test-inlinevolume-9knx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292078866s
Sep 17 07:27:51.431: INFO: Pod "pod-subpath-test-inlinevolume-9knx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389001222s
Sep 17 07:27:53.528: INFO: Pod "pod-subpath-test-inlinevolume-9knx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.485825053s
STEP: Saw pod success
Sep 17 07:27:53.528: INFO: Pod "pod-subpath-test-inlinevolume-9knx" satisfied condition "Succeeded or Failed"
Sep 17 07:27:53.626: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-9knx container test-container-subpath-inlinevolume-9knx: <nil>
STEP: delete the pod
Sep 17 07:27:53.847: INFO: Waiting for pod pod-subpath-test-inlinevolume-9knx to disappear
Sep 17 07:27:53.943: INFO: Pod pod-subpath-test-inlinevolume-9knx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9knx
Sep 17 07:27:53.943: INFO: Deleting pod "pod-subpath-test-inlinevolume-9knx" in namespace "provisioning-119"
STEP: Deleting pod
Sep 17 07:27:54.041: INFO: Deleting pod "pod-subpath-test-inlinevolume-9knx" in namespace "provisioning-119"
Sep 17 07:27:54.234: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-119" in namespace "provisioning-119" to be "Succeeded or Failed"
Sep 17 07:27:54.331: INFO: Pod "hostpath-symlink-prep-provisioning-119": Phase="Pending", Reason="", readiness=false. Elapsed: 96.075936ms
Sep 17 07:27:56.427: INFO: Pod "hostpath-symlink-prep-provisioning-119": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19231661s
Sep 17 07:27:58.524: INFO: Pod "hostpath-symlink-prep-provisioning-119": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289610088s
Sep 17 07:28:00.634: INFO: Pod "hostpath-symlink-prep-provisioning-119": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.399419638s
STEP: Saw pod success
Sep 17 07:28:00.634: INFO: Pod "hostpath-symlink-prep-provisioning-119" satisfied condition "Succeeded or Failed"
Sep 17 07:28:00.634: INFO: Deleting pod "hostpath-symlink-prep-provisioning-119" in namespace "provisioning-119"
Sep 17 07:28:00.735: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-119" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:28:00.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-119" for this suite.
... skipping 15 lines ...
Sep 17 07:27:52.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 17 07:27:53.282: INFO: Waiting up to 5m0s for pod "pod-00102802-9e1c-4df1-a237-03dcf11fa13f" in namespace "emptydir-4492" to be "Succeeded or Failed"
Sep 17 07:27:53.382: INFO: Pod "pod-00102802-9e1c-4df1-a237-03dcf11fa13f": Phase="Pending", Reason="", readiness=false. Elapsed: 99.197617ms
Sep 17 07:27:55.479: INFO: Pod "pod-00102802-9e1c-4df1-a237-03dcf11fa13f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196937559s
Sep 17 07:27:57.607: INFO: Pod "pod-00102802-9e1c-4df1-a237-03dcf11fa13f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32439595s
Sep 17 07:27:59.705: INFO: Pod "pod-00102802-9e1c-4df1-a237-03dcf11fa13f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.422830749s
Sep 17 07:28:01.812: INFO: Pod "pod-00102802-9e1c-4df1-a237-03dcf11fa13f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.529413324s
STEP: Saw pod success
Sep 17 07:28:01.812: INFO: Pod "pod-00102802-9e1c-4df1-a237-03dcf11fa13f" satisfied condition "Succeeded or Failed"
Sep 17 07:28:01.909: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-00102802-9e1c-4df1-a237-03dcf11fa13f container test-container: <nil>
STEP: delete the pod
Sep 17 07:28:02.110: INFO: Waiting for pod pod-00102802-9e1c-4df1-a237-03dcf11fa13f to disappear
Sep 17 07:28:02.208: INFO: Pod pod-00102802-9e1c-4df1-a237-03dcf11fa13f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.716 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":79,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:02.449: INFO: Driver emptydir doesn't support ext4 -- skipping
... skipping 165 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":119,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:05.654: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 64 lines ...
Sep 17 07:25:13.045: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9728 to register on node ip-172-20-53-192.eu-west-2.compute.internal
STEP: Creating pod
Sep 17 07:25:18.445: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Sep 17 07:25:18.557: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-zk2tb] to have phase Bound
Sep 17 07:25:18.653: INFO: PersistentVolumeClaim pvc-zk2tb found and phase=Bound (96.189494ms)
STEP: checking for CSIInlineVolumes feature
Sep 17 07:25:31.344: INFO: Error getting logs for pod inline-volume-hrhl9: the server rejected our request for an unknown reason (get pods inline-volume-hrhl9)
Sep 17 07:25:31.537: INFO: Deleting pod "inline-volume-hrhl9" in namespace "csi-mock-volumes-9728"
Sep 17 07:25:31.635: INFO: Wait up to 5m0s for pod "inline-volume-hrhl9" to be fully deleted
STEP: Deleting the previously created pod
Sep 17 07:27:37.828: INFO: Deleting pod "pvc-volume-tester-w7d9d" in namespace "csi-mock-volumes-9728"
Sep 17 07:27:37.925: INFO: Wait up to 5m0s for pod "pvc-volume-tester-w7d9d" to be fully deleted
STEP: Checking CSI driver logs
Sep 17 07:27:44.217: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 08f5ceaa-d48c-4579-91d4-13a80b0ff28c
Sep 17 07:27:44.217: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Sep 17 07:27:44.217: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Sep 17 07:27:44.217: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-w7d9d
Sep 17 07:27:44.217: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-9728
Sep 17 07:27:44.217: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/08f5ceaa-d48c-4579-91d4-13a80b0ff28c/volumes/kubernetes.io~csi/pvc-069201de-0a3b-4b16-aebd-6c024c7c7212/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-w7d9d
Sep 17 07:27:44.217: INFO: Deleting pod "pvc-volume-tester-w7d9d" in namespace "csi-mock-volumes-9728"
STEP: Deleting claim pvc-zk2tb
Sep 17 07:27:44.515: INFO: Waiting up to 2m0s for PersistentVolume pvc-069201de-0a3b-4b16-aebd-6c024c7c7212 to get deleted
Sep 17 07:27:44.611: INFO: PersistentVolume pvc-069201de-0a3b-4b16-aebd-6c024c7c7212 was removed
STEP: Deleting storageclass csi-mock-volumes-9728-scl7kjm
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":4,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:46.726 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:185
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":14,"skipped":104,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
Sep 17 07:28:15.522: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.688 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 127 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317
    should preserve attachment policy when no CSIDriver present
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":6,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:16.713: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 160 lines ...
• [SLOW TEST:129.391 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete successful finished jobs with limit of one successful job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:278
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":7,"skipped":58,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:28:18.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:28:20.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4898" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":8,"skipped":58,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:20.257: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 55 lines ...
• [SLOW TEST:16.822 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":10,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:21.546: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 203 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":10,"skipped":116,"failed":0}
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:28:17.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Sep 17 07:28:18.430: INFO: Waiting up to 5m0s for pod "var-expansion-726f22a9-56ef-43b8-bceb-15bce5f253f9" in namespace "var-expansion-7880" to be "Succeeded or Failed"
Sep 17 07:28:18.526: INFO: Pod "var-expansion-726f22a9-56ef-43b8-bceb-15bce5f253f9": Phase="Pending", Reason="", readiness=false. Elapsed: 96.255341ms
Sep 17 07:28:20.623: INFO: Pod "var-expansion-726f22a9-56ef-43b8-bceb-15bce5f253f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19281413s
Sep 17 07:28:22.721: INFO: Pod "var-expansion-726f22a9-56ef-43b8-bceb-15bce5f253f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.291058696s
STEP: Saw pod success
Sep 17 07:28:22.721: INFO: Pod "var-expansion-726f22a9-56ef-43b8-bceb-15bce5f253f9" satisfied condition "Succeeded or Failed"
Sep 17 07:28:22.818: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod var-expansion-726f22a9-56ef-43b8-bceb-15bce5f253f9 container dapi-container: <nil>
STEP: delete the pod
Sep 17 07:28:23.016: INFO: Waiting for pod var-expansion-726f22a9-56ef-43b8-bceb-15bce5f253f9 to disappear
Sep 17 07:28:23.112: INFO: Pod var-expansion-726f22a9-56ef-43b8-bceb-15bce5f253f9 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.470 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":116,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:23.321: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 156 lines ...
Sep 17 07:27:34.060: INFO: PersistentVolumeClaim csi-hostpathdqszn found but phase is Pending instead of Bound.
Sep 17 07:27:36.157: INFO: PersistentVolumeClaim csi-hostpathdqszn found but phase is Pending instead of Bound.
Sep 17 07:27:38.254: INFO: PersistentVolumeClaim csi-hostpathdqszn found but phase is Pending instead of Bound.
Sep 17 07:27:40.350: INFO: PersistentVolumeClaim csi-hostpathdqszn found and phase=Bound (6.386277717s)
STEP: Creating pod pod-subpath-test-dynamicpv-4ls2
STEP: Creating a pod to test subpath
Sep 17 07:27:40.640: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-4ls2" in namespace "provisioning-607" to be "Succeeded or Failed"
Sep 17 07:27:40.737: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 96.314051ms
Sep 17 07:27:42.834: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193187209s
Sep 17 07:27:44.940: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29952679s
Sep 17 07:27:47.037: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396525136s
Sep 17 07:27:49.136: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.495779742s
Sep 17 07:27:51.234: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.593233336s
Sep 17 07:27:53.331: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.690092329s
Sep 17 07:27:55.428: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.787853462s
Sep 17 07:27:57.538: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.898036076s
STEP: Saw pod success
Sep 17 07:27:57.539: INFO: Pod "pod-subpath-test-dynamicpv-4ls2" satisfied condition "Succeeded or Failed"
Sep 17 07:27:57.652: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-4ls2 container test-container-subpath-dynamicpv-4ls2: <nil>
STEP: delete the pod
Sep 17 07:27:57.864: INFO: Waiting for pod pod-subpath-test-dynamicpv-4ls2 to disappear
Sep 17 07:27:57.967: INFO: Pod pod-subpath-test-dynamicpv-4ls2 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-4ls2
Sep 17 07:27:57.967: INFO: Deleting pod "pod-subpath-test-dynamicpv-4ls2" in namespace "provisioning-607"
STEP: Creating pod pod-subpath-test-dynamicpv-4ls2
STEP: Creating a pod to test subpath
Sep 17 07:27:58.162: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-4ls2" in namespace "provisioning-607" to be "Succeeded or Failed"
Sep 17 07:27:58.258: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 96.410729ms
Sep 17 07:28:00.356: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194143753s
Sep 17 07:28:02.453: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291629246s
Sep 17 07:28:04.550: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.388686492s
Sep 17 07:28:06.651: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.489513538s
Sep 17 07:28:08.749: INFO: Pod "pod-subpath-test-dynamicpv-4ls2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.586795397s
STEP: Saw pod success
Sep 17 07:28:08.749: INFO: Pod "pod-subpath-test-dynamicpv-4ls2" satisfied condition "Succeeded or Failed"
Sep 17 07:28:08.845: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-4ls2 container test-container-subpath-dynamicpv-4ls2: <nil>
STEP: delete the pod
Sep 17 07:28:09.050: INFO: Waiting for pod pod-subpath-test-dynamicpv-4ls2 to disappear
Sep 17 07:28:09.147: INFO: Pod pod-subpath-test-dynamicpv-4ls2 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-4ls2
Sep 17 07:28:09.147: INFO: Deleting pod "pod-subpath-test-dynamicpv-4ls2" in namespace "provisioning-607"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":11,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:26.097: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47
    should be mountable
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":13,"skipped":98,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:26.988: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 70 lines ...
Sep 17 07:28:15.227: INFO: PersistentVolumeClaim pvc-b9qmt found but phase is Pending instead of Bound.
Sep 17 07:28:17.326: INFO: PersistentVolumeClaim pvc-b9qmt found and phase=Bound (14.778720251s)
Sep 17 07:28:17.326: INFO: Waiting up to 3m0s for PersistentVolume local-fhk79 to have phase Bound
Sep 17 07:28:17.423: INFO: PersistentVolume local-fhk79 found and phase=Bound (96.061139ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-sctd
STEP: Creating a pod to test exec-volume-test
Sep 17 07:28:17.724: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-sctd" in namespace "volume-1095" to be "Succeeded or Failed"
Sep 17 07:28:17.822: INFO: Pod "exec-volume-test-preprovisionedpv-sctd": Phase="Pending", Reason="", readiness=false. Elapsed: 98.645308ms
Sep 17 07:28:19.920: INFO: Pod "exec-volume-test-preprovisionedpv-sctd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196277699s
Sep 17 07:28:22.028: INFO: Pod "exec-volume-test-preprovisionedpv-sctd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304411566s
Sep 17 07:28:24.126: INFO: Pod "exec-volume-test-preprovisionedpv-sctd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.402180242s
STEP: Saw pod success
Sep 17 07:28:24.126: INFO: Pod "exec-volume-test-preprovisionedpv-sctd" satisfied condition "Succeeded or Failed"
Sep 17 07:28:24.223: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-sctd container exec-container-preprovisionedpv-sctd: <nil>
STEP: delete the pod
Sep 17 07:28:24.426: INFO: Waiting for pod exec-volume-test-preprovisionedpv-sctd to disappear
Sep 17 07:28:24.522: INFO: Pod exec-volume-test-preprovisionedpv-sctd no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-sctd
Sep 17 07:28:24.523: INFO: Deleting pod "exec-volume-test-preprovisionedpv-sctd" in namespace "volume-1095"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":68,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 71 lines ...
• [SLOW TEST:34.513 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":43,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:27.601: INFO: Only supported for providers [openstack] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 17 07:28:20.880: INFO: Waiting up to 5m0s for pod "pod-675b50ee-43bd-445d-9bff-64346b9336b4" in namespace "emptydir-8790" to be "Succeeded or Failed"
Sep 17 07:28:20.977: INFO: Pod "pod-675b50ee-43bd-445d-9bff-64346b9336b4": Phase="Pending", Reason="", readiness=false. Elapsed: 97.245471ms
Sep 17 07:28:23.075: INFO: Pod "pod-675b50ee-43bd-445d-9bff-64346b9336b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195010325s
Sep 17 07:28:25.176: INFO: Pod "pod-675b50ee-43bd-445d-9bff-64346b9336b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296028674s
Sep 17 07:28:27.275: INFO: Pod "pod-675b50ee-43bd-445d-9bff-64346b9336b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.395304018s
STEP: Saw pod success
Sep 17 07:28:27.275: INFO: Pod "pod-675b50ee-43bd-445d-9bff-64346b9336b4" satisfied condition "Succeeded or Failed"
Sep 17 07:28:27.372: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-675b50ee-43bd-445d-9bff-64346b9336b4 container test-container: <nil>
STEP: delete the pod
Sep 17 07:28:27.576: INFO: Waiting for pod pod-675b50ee-43bd-445d-9bff-64346b9336b4 to disappear
Sep 17 07:28:27.673: INFO: Pod pod-675b50ee-43bd-445d-9bff-64346b9336b4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on tmpfs should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":9,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:27.880: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 211 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:166
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":8,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:29.070: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 92 lines ...
• [SLOW TEST:23.557 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":13,"skipped":127,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
• [SLOW TEST:23.089 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:29.498: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-9a949a8a-d1fe-4024-8ec3-53eb9a6a01d9
STEP: Creating a pod to test consume secrets
Sep 17 07:28:26.792: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-60dd5848-089a-4278-9fd3-3dabb0d49f80" in namespace "projected-6433" to be "Succeeded or Failed"
Sep 17 07:28:26.900: INFO: Pod "pod-projected-secrets-60dd5848-089a-4278-9fd3-3dabb0d49f80": Phase="Pending", Reason="", readiness=false. Elapsed: 107.400679ms
Sep 17 07:28:28.998: INFO: Pod "pod-projected-secrets-60dd5848-089a-4278-9fd3-3dabb0d49f80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.205513279s
STEP: Saw pod success
Sep 17 07:28:28.998: INFO: Pod "pod-projected-secrets-60dd5848-089a-4278-9fd3-3dabb0d49f80" satisfied condition "Succeeded or Failed"
Sep 17 07:28:29.094: INFO: Trying to get logs from node ip-172-20-53-192.eu-west-2.compute.internal pod pod-projected-secrets-60dd5848-089a-4278-9fd3-3dabb0d49f80 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 17 07:28:29.294: INFO: Waiting for pod pod-projected-secrets-60dd5848-089a-4278-9fd3-3dabb0d49f80 to disappear
Sep 17 07:28:29.390: INFO: Pod pod-projected-secrets-60dd5848-089a-4278-9fd3-3dabb0d49f80 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:28:29.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6433" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":65,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:29.642: INFO: Only supported for providers [gce gke] (not aws)
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:28:29.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-4610" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":9,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:29.754: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:28:29.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-487" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":14,"skipped":128,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:30.064: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 14 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":9,"skipped":54,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:28:23.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 07:28:23.766: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d24f2d1c-fc85-40a5-b66b-83088fc267d8" in namespace "projected-4906" to be "Succeeded or Failed"
Sep 17 07:28:23.864: INFO: Pod "downwardapi-volume-d24f2d1c-fc85-40a5-b66b-83088fc267d8": Phase="Pending", Reason="", readiness=false. Elapsed: 97.646601ms
Sep 17 07:28:25.962: INFO: Pod "downwardapi-volume-d24f2d1c-fc85-40a5-b66b-83088fc267d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195304067s
Sep 17 07:28:28.060: INFO: Pod "downwardapi-volume-d24f2d1c-fc85-40a5-b66b-83088fc267d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293117811s
Sep 17 07:28:30.157: INFO: Pod "downwardapi-volume-d24f2d1c-fc85-40a5-b66b-83088fc267d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.390500032s
STEP: Saw pod success
Sep 17 07:28:30.157: INFO: Pod "downwardapi-volume-d24f2d1c-fc85-40a5-b66b-83088fc267d8" satisfied condition "Succeeded or Failed"
Sep 17 07:28:30.256: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod downwardapi-volume-d24f2d1c-fc85-40a5-b66b-83088fc267d8 container client-container: <nil>
STEP: delete the pod
Sep 17 07:28:30.456: INFO: Waiting for pod downwardapi-volume-d24f2d1c-fc85-40a5-b66b-83088fc267d8 to disappear
Sep 17 07:28:30.552: INFO: Pod downwardapi-volume-d24f2d1c-fc85-40a5-b66b-83088fc267d8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.571 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":54,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:30.766: INFO: Only supported for providers [gce gke] (not aws)
... skipping 87 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":13,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Sep 17 07:27:59.045: INFO: PersistentVolumeClaim pvc-h57lp found but phase is Pending instead of Bound.
Sep 17 07:28:01.142: INFO: PersistentVolumeClaim pvc-h57lp found and phase=Bound (4.292591534s)
Sep 17 07:28:01.142: INFO: Waiting up to 3m0s for PersistentVolume local-cmgtv to have phase Bound
Sep 17 07:28:01.238: INFO: PersistentVolume local-cmgtv found and phase=Bound (96.236188ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8xl7
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 07:28:01.531: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8xl7" in namespace "provisioning-4465" to be "Succeeded or Failed"
Sep 17 07:28:01.643: INFO: Pod "pod-subpath-test-preprovisionedpv-8xl7": Phase="Pending", Reason="", readiness=false. Elapsed: 112.784264ms
Sep 17 07:28:03.740: INFO: Pod "pod-subpath-test-preprovisionedpv-8xl7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209445889s
Sep 17 07:28:05.839: INFO: Pod "pod-subpath-test-preprovisionedpv-8xl7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308750394s
Sep 17 07:28:07.936: INFO: Pod "pod-subpath-test-preprovisionedpv-8xl7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.405113673s
Sep 17 07:28:10.033: INFO: Pod "pod-subpath-test-preprovisionedpv-8xl7": Phase="Running", Reason="", readiness=true. Elapsed: 8.502061192s
Sep 17 07:28:12.129: INFO: Pod "pod-subpath-test-preprovisionedpv-8xl7": Phase="Running", Reason="", readiness=true. Elapsed: 10.598465355s
... skipping 4 lines ...
Sep 17 07:28:22.622: INFO: Pod "pod-subpath-test-preprovisionedpv-8xl7": Phase="Running", Reason="", readiness=true. Elapsed: 21.091509303s
Sep 17 07:28:24.721: INFO: Pod "pod-subpath-test-preprovisionedpv-8xl7": Phase="Running", Reason="", readiness=true. Elapsed: 23.189936519s
Sep 17 07:28:26.818: INFO: Pod "pod-subpath-test-preprovisionedpv-8xl7": Phase="Running", Reason="", readiness=true. Elapsed: 25.287036974s
Sep 17 07:28:28.915: INFO: Pod "pod-subpath-test-preprovisionedpv-8xl7": Phase="Running", Reason="", readiness=true. Elapsed: 27.384509582s
Sep 17 07:28:31.015: INFO: Pod "pod-subpath-test-preprovisionedpv-8xl7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.48409565s
STEP: Saw pod success
Sep 17 07:28:31.015: INFO: Pod "pod-subpath-test-preprovisionedpv-8xl7" satisfied condition "Succeeded or Failed"
Sep 17 07:28:31.111: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-8xl7 container test-container-subpath-preprovisionedpv-8xl7: <nil>
STEP: delete the pod
Sep 17 07:28:31.311: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8xl7 to disappear
Sep 17 07:28:31.408: INFO: Pod pod-subpath-test-preprovisionedpv-8xl7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8xl7
Sep 17 07:28:31.408: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8xl7" in namespace "provisioning-4465"
... skipping 29 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Sep 17 07:28:29.688: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-4c7dff12-b834-4bbf-b103-a20f6eb01057" in namespace "security-context-test-2352" to be "Succeeded or Failed"
Sep 17 07:28:29.786: INFO: Pod "busybox-readonly-true-4c7dff12-b834-4bbf-b103-a20f6eb01057": Phase="Pending", Reason="", readiness=false. Elapsed: 97.748808ms
Sep 17 07:28:31.884: INFO: Pod "busybox-readonly-true-4c7dff12-b834-4bbf-b103-a20f6eb01057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196656396s
Sep 17 07:28:33.982: INFO: Pod "busybox-readonly-true-4c7dff12-b834-4bbf-b103-a20f6eb01057": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294716168s
Sep 17 07:28:36.081: INFO: Pod "busybox-readonly-true-4c7dff12-b834-4bbf-b103-a20f6eb01057": Phase="Failed", Reason="", readiness=false. Elapsed: 6.393145922s
Sep 17 07:28:36.081: INFO: Pod "busybox-readonly-true-4c7dff12-b834-4bbf-b103-a20f6eb01057" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 07:28:36.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2352" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":9,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:36.313: INFO: Only supported for providers [vsphere] (not aws)
... skipping 138 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":13,"skipped":104,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:37.108: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:28:33.824: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Sep 17 07:28:34.306: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 17 07:28:34.404: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9lh8
STEP: Creating a pod to test subpath
Sep 17 07:28:34.503: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9lh8" in namespace "provisioning-1435" to be "Succeeded or Failed"
Sep 17 07:28:34.600: INFO: Pod "pod-subpath-test-inlinevolume-9lh8": Phase="Pending", Reason="", readiness=false. Elapsed: 96.638687ms
Sep 17 07:28:36.698: INFO: Pod "pod-subpath-test-inlinevolume-9lh8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.195121778s
STEP: Saw pod success
Sep 17 07:28:36.698: INFO: Pod "pod-subpath-test-inlinevolume-9lh8" satisfied condition "Succeeded or Failed"
Sep 17 07:28:36.794: INFO: Trying to get logs from node ip-172-20-33-78.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-9lh8 container test-container-volume-inlinevolume-9lh8: <nil>
STEP: delete the pod
Sep 17 07:28:36.995: INFO: Waiting for pod pod-subpath-test-inlinevolume-9lh8 to disappear
Sep 17 07:28:37.091: INFO: Pod pod-subpath-test-inlinevolume-9lh8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9lh8
Sep 17 07:28:37.091: INFO: Deleting pod "pod-subpath-test-inlinevolume-9lh8" in namespace "provisioning-1435"
... skipping 123 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform rolling updates and roll backs of template modifications with PVCs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:288
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:37.600: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 07:28:30.338: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a872215-2f52-42a4-a27b-41452d5c418d" in namespace "downward-api-8522" to be "Succeeded or Failed"
Sep 17 07:28:30.434: INFO: Pod "downwardapi-volume-3a872215-2f52-42a4-a27b-41452d5c418d": Phase="Pending", Reason="", readiness=false. Elapsed: 95.580669ms
Sep 17 07:28:32.530: INFO: Pod "downwardapi-volume-3a872215-2f52-42a4-a27b-41452d5c418d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1923451s
Sep 17 07:28:34.627: INFO: Pod "downwardapi-volume-3a872215-2f52-42a4-a27b-41452d5c418d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288530013s
Sep 17 07:28:36.723: INFO: Pod "downwardapi-volume-3a872215-2f52-42a4-a27b-41452d5c418d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.384598435s
STEP: Saw pod success
Sep 17 07:28:36.723: INFO: Pod "downwardapi-volume-3a872215-2f52-42a4-a27b-41452d5c418d" satisfied condition "Succeeded or Failed"
Sep 17 07:28:36.818: INFO: Trying to get logs from node ip-172-20-51-79.eu-west-2.compute.internal pod downwardapi-volume-3a872215-2f52-42a4-a27b-41452d5c418d container client-container: <nil>
STEP: delete the pod
Sep 17 07:28:37.421: INFO: Waiting for pod downwardapi-volume-3a872215-2f52-42a4-a27b-41452d5c418d to disappear
Sep 17 07:28:37.516: INFO: Pod downwardapi-volume-3a872215-2f52-42a4-a27b-41452d5c418d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.952 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:37.737: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 151 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1316
    should update the label on a resource  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":11,"skipped":57,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":6,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:40.407: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 143 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":13,"skipped":109,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:28:01.047: INFO: >>> kubeConfig: /root/.kube/config
... skipping 20 lines ...
Sep 17 07:28:28.229: INFO: PersistentVolumeClaim pvc-w76ls found but phase is Pending instead of Bound.
Sep 17 07:28:30.327: INFO: PersistentVolumeClaim pvc-w76ls found and phase=Bound (12.714205769s)
Sep 17 07:28:30.328: INFO: Waiting up to 3m0s for PersistentVolume local-n6fxr to have phase Bound
Sep 17 07:28:30.424: INFO: PersistentVolume local-n6fxr found and phase=Bound (96.477572ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j88n
STEP: Creating a pod to test subpath
Sep 17 07:28:30.716: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j88n" in namespace "provisioning-2890" to be "Succeeded or Failed"
Sep 17 07:28:30.812: INFO: Pod "pod-subpath-test-preprovisionedpv-j88n": Phase="Pending", Reason="", readiness=false. Elapsed: 96.268018ms
Sep 17 07:28:32.910: INFO: Pod "pod-subpath-test-preprovisionedpv-j88n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193523762s
Sep 17 07:28:35.007: INFO: Pod "pod-subpath-test-preprovisionedpv-j88n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291299504s
Sep 17 07:28:37.104: INFO: Pod "pod-subpath-test-preprovisionedpv-j88n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.387583199s
STEP: Saw pod success
Sep 17 07:28:37.104: INFO: Pod "pod-subpath-test-preprovisionedpv-j88n" satisfied condition "Succeeded or Failed"
Sep 17 07:28:37.206: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-j88n container test-container-subpath-preprovisionedpv-j88n: <nil>
STEP: delete the pod
Sep 17 07:28:37.414: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j88n to disappear
Sep 17 07:28:37.510: INFO: Pod pod-subpath-test-preprovisionedpv-j88n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j88n
Sep 17 07:28:37.510: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j88n" in namespace "provisioning-2890"
STEP: Creating pod pod-subpath-test-preprovisionedpv-j88n
STEP: Creating a pod to test subpath
Sep 17 07:28:37.707: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j88n" in namespace "provisioning-2890" to be "Succeeded or Failed"
Sep 17 07:28:37.803: INFO: Pod "pod-subpath-test-preprovisionedpv-j88n": Phase="Pending", Reason="", readiness=false. Elapsed: 96.168728ms
Sep 17 07:28:39.901: INFO: Pod "pod-subpath-test-preprovisionedpv-j88n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193946042s
Sep 17 07:28:41.998: INFO: Pod "pod-subpath-test-preprovisionedpv-j88n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.290921532s
STEP: Saw pod success
Sep 17 07:28:41.998: INFO: Pod "pod-subpath-test-preprovisionedpv-j88n" satisfied condition "Succeeded or Failed"
Sep 17 07:28:42.095: INFO: Trying to get logs from node ip-172-20-60-186.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-j88n container test-container-subpath-preprovisionedpv-j88n: <nil>
STEP: delete the pod
Sep 17 07:28:42.470: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j88n to disappear
Sep 17 07:28:42.567: INFO: Pod pod-subpath-test-preprovisionedpv-j88n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j88n
Sep 17 07:28:42.567: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j88n" in namespace "provisioning-2890"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":14,"skipped":109,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 07:28:37.489: INFO: >>> kubeConfig: /root/.kube/config
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":10,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 17 07:28:45.992: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 40851 lines ...
al-55649fd747 to 4\"\nI0917 07:35:38.273601       1 event.go:291] \"Event occurred\" object=\"apply-7520/deployment-shared-map-item-removal-55649fd747\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-shared-map-item-removal-55649fd747-kpwmk\"\nI0917 07:35:38.284288       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"apply-7520/deployment-shared-map-item-removal\" err=\"Operation cannot be fulfilled on deployments.apps \\\"deployment-shared-map-item-removal\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0917 07:35:38.343960       1 namespace_controller.go:162] deletion of namespace apply-8919 failed: unexpected items still remain in namespace: apply-8919 for gvr: /v1, Resource=pods\nI0917 07:35:38.353906       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-6332-1053/csi-hostpathplugin-5784b44b44\" objectUID=9585f076-9629-47da-91f1-6e7d2af8405e kind=\"ControllerRevision\" virtual=false\nI0917 07:35:38.354286       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-6332-1053/csi-hostpathplugin\nI0917 07:35:38.354377       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-6332-1053/csi-hostpathplugin-0\" objectUID=b5f99ea1-c062-4210-9218-397ebfef3a1e kind=\"Pod\" virtual=false\nI0917 07:35:38.356038       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-6332-1053/csi-hostpathplugin-5784b44b44\" objectUID=9585f076-9629-47da-91f1-6e7d2af8405e kind=\"ControllerRevision\" propagationPolicy=Background\nI0917 07:35:38.356924       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-6332-1053/csi-hostpathplugin-0\" objectUID=b5f99ea1-c062-4210-9218-397ebfef3a1e kind=\"Pod\" propagationPolicy=Background\nI0917 07:35:38.710866       1 namespace_controller.go:185] Namespace has been deleted ephemeral-6332\nI0917 07:35:38.870650       1 
garbagecollector.go:471] \"Processing object\" object=\"apply-7520/deployment-shared-map-item-removal-55649fd747\" objectUID=a7811c98-bce9-4847-9895-f69f67e3eecb kind=\"ReplicaSet\" virtual=false\nI0917 07:35:38.871030       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"apply-7520/deployment-shared-map-item-removal\"\nI0917 07:35:38.872561       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-7520/deployment-shared-map-item-removal-55649fd747\" objectUID=a7811c98-bce9-4847-9895-f69f67e3eecb kind=\"ReplicaSet\" propagationPolicy=Background\nI0917 07:35:38.875752       1 garbagecollector.go:471] \"Processing object\" object=\"apply-7520/deployment-shared-map-item-removal-55649fd747-tq92l\" objectUID=405454f9-f9a8-4f3c-a53c-900404fd7fc7 kind=\"Pod\" virtual=false\nI0917 07:35:38.876085       1 garbagecollector.go:471] \"Processing object\" object=\"apply-7520/deployment-shared-map-item-removal-55649fd747-zhwn7\" objectUID=8222e3d9-7d87-482b-aa90-bc435e1101bb kind=\"Pod\" virtual=false\nI0917 07:35:38.876285       1 garbagecollector.go:471] \"Processing object\" object=\"apply-7520/deployment-shared-map-item-removal-55649fd747-v65dr\" objectUID=56fcaae1-8e8a-4a45-8cc8-3d3d2c0d037d kind=\"Pod\" virtual=false\nI0917 07:35:38.876510       1 garbagecollector.go:471] \"Processing object\" object=\"apply-7520/deployment-shared-map-item-removal-55649fd747-kpwmk\" objectUID=2151d7bd-1112-47fc-b490-339cbe257a84 kind=\"Pod\" virtual=false\nI0917 07:35:38.880365       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-7520/deployment-shared-map-item-removal-55649fd747-kpwmk\" objectUID=2151d7bd-1112-47fc-b490-339cbe257a84 kind=\"Pod\" propagationPolicy=Background\nI0917 07:35:38.881785       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-7520/deployment-shared-map-item-removal-55649fd747-v65dr\" objectUID=56fcaae1-8e8a-4a45-8cc8-3d3d2c0d037d kind=\"Pod\" propagationPolicy=Background\nI0917 07:35:38.881980      
 1 garbagecollector.go:580] \"Deleting object\" object=\"apply-7520/deployment-shared-map-item-removal-55649fd747-zhwn7\" objectUID=8222e3d9-7d87-482b-aa90-bc435e1101bb kind=\"Pod\" propagationPolicy=Background\nI0917 07:35:38.884195       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-7520/deployment-shared-map-item-removal-55649fd747-tq92l\" objectUID=405454f9-f9a8-4f3c-a53c-900404fd7fc7 kind=\"Pod\" propagationPolicy=Background\nE0917 07:35:39.032901       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:35:39.096509       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-6072/pvc-42tzt\"\nI0917 07:35:39.101210       1 pv_controller.go:640] volume \"pvc-11b0de35-2d23-4066-a01e-d2e209b49fc6\" is released and reclaim policy \"Delete\" will be executed\nI0917 07:35:39.106894       1 pv_controller.go:879] volume \"pvc-11b0de35-2d23-4066-a01e-d2e209b49fc6\" entered phase \"Released\"\nI0917 07:35:39.109524       1 pv_controller.go:1340] isVolumeReleased[pvc-11b0de35-2d23-4066-a01e-d2e209b49fc6]: volume is released\nI0917 07:35:39.124274       1 namespace_controller.go:185] Namespace has been deleted container-probe-5594\nI0917 07:35:39.159035       1 endpoints_controller.go:374] \"Error syncing endpoints, retrying\" service=\"dns-9922/dns-test-service\" err=\"Operation cannot be fulfilled on endpoints \\\"dns-test-service\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0917 07:35:39.159478       1 event.go:291] \"Event occurred\" object=\"dns-9922/dns-test-service\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint dns-9922/dns-test-service: Operation cannot be fulfilled on endpoints \\\"dns-test-service\\\": the object has been 
modified; please apply your changes to the latest version and try again\"\nI0917 07:35:39.244037       1 garbagecollector.go:471] \"Processing object\" object=\"dns-9922/test-service-2-8v4kx\" objectUID=ffeeee4c-7c8a-4c94-ba0f-68fe7ab0fea9 kind=\"EndpointSlice\" virtual=false\nI0917 07:35:39.246812       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-9922/test-service-2-8v4kx\" objectUID=ffeeee4c-7c8a-4c94-ba0f-68fe7ab0fea9 kind=\"EndpointSlice\" propagationPolicy=Background\nI0917 07:35:39.284993       1 namespace_controller.go:185] Namespace has been deleted emptydir-2852\nI0917 07:35:39.348047       1 garbagecollector.go:471] \"Processing object\" object=\"dns-9922/dns-test-service-gbhhd\" objectUID=20db620e-b9ca-4b15-94da-4e27e3e8d043 kind=\"EndpointSlice\" virtual=false\nI0917 07:35:39.350370       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-9922/dns-test-service-gbhhd\" objectUID=20db620e-b9ca-4b15-94da-4e27e3e8d043 kind=\"EndpointSlice\" propagationPolicy=Background\nI0917 07:35:39.691914       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-11b0de35-2d23-4066-a01e-d2e209b49fc6\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6072^4\") on node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:35:39.699931       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-11b0de35-2d23-4066-a01e-d2e209b49fc6\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6072^4\") on node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:35:39.715471       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-8fc2f03c-f343-49ca-9377-6f501b281cca\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-5104^ba3a7b56-1789-11ec-8103-3eab9e6bf0f7\") on node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:35:39.722301       1 operation_generator.go:1577] Verified volume is safe to detach for volume 
\"pvc-8fc2f03c-f343-49ca-9377-6f501b281cca\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-5104^ba3a7b56-1789-11ec-8103-3eab9e6bf0f7\") on node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nE0917 07:35:40.121346       1 pv_protection_controller.go:118] PV pvc-11b0de35-2d23-4066-a01e-d2e209b49fc6 failed with : Operation cannot be fulfilled on persistentvolumes \"pvc-11b0de35-2d23-4066-a01e-d2e209b49fc6\": the object has been modified; please apply your changes to the latest version and try again\nI0917 07:35:40.124808       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-6072/pvc-42tzt\" was already processed\nI0917 07:35:40.250417       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-11b0de35-2d23-4066-a01e-d2e209b49fc6\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6072^4\") on node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:35:40.279987       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-8fc2f03c-f343-49ca-9377-6f501b281cca\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-5104^ba3a7b56-1789-11ec-8103-3eab9e6bf0f7\") on node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:35:40.297449       1 namespace_controller.go:185] Namespace has been deleted job-2734\nI0917 07:35:40.566350       1 event.go:291] \"Event occurred\" object=\"provisioning-5320/pvc-7677p\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:35:40.568631       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-9475\nI0917 07:35:40.602951       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5603-9821/csi-mockplugin-5d945f44d6\" objectUID=b3d1aa49-1145-4fa1-b8f3-dda4977224fb kind=\"ControllerRevision\" virtual=false\nI0917 07:35:40.603036       1 stateful_set.go:440] StatefulSet 
has been deleted csi-mock-volumes-5603-9821/csi-mockplugin\nI0917 07:35:40.603125       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5603-9821/csi-mockplugin-0\" objectUID=0d7e3a64-61c7-48ea-8c51-dc09ac18755b kind=\"Pod\" virtual=false\nI0917 07:35:40.604776       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5603-9821/csi-mockplugin-0\" objectUID=0d7e3a64-61c7-48ea-8c51-dc09ac18755b kind=\"Pod\" propagationPolicy=Background\nI0917 07:35:40.605184       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5603-9821/csi-mockplugin-5d945f44d6\" objectUID=b3d1aa49-1145-4fa1-b8f3-dda4977224fb kind=\"ControllerRevision\" propagationPolicy=Background\nI0917 07:35:40.650219       1 namespace_controller.go:185] Namespace has been deleted projected-6383\nI0917 07:35:40.675455       1 event.go:291] \"Event occurred\" object=\"provisioning-5320/pvc-7677p\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:35:40.889740       1 pv_controller.go:879] volume \"pvc-3031d0cc-7382-48b2-890e-48154fef2540\" entered phase \"Bound\"\nI0917 07:35:40.889959       1 pv_controller.go:982] volume \"pvc-3031d0cc-7382-48b2-890e-48154fef2540\" bound to claim \"provisioning-1302/csi-hostpathmrgrr\"\nI0917 07:35:40.899229       1 pv_controller.go:823] claim \"provisioning-1302/csi-hostpathmrgrr\" entered phase \"Bound\"\nI0917 07:35:41.093925       1 namespace_controller.go:185] Namespace has been deleted certificates-690\nI0917 07:35:41.509549       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-4300/frontend-685fc574d5\" need=3 creating=3\nI0917 07:35:41.509871       1 event.go:291] \"Event occurred\" object=\"kubectl-4300/frontend\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"ScalingReplicaSet\" message=\"Scaled up replica set frontend-685fc574d5 to 3\"\nI0917 07:35:41.518331       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kubectl-4300/frontend\" err=\"Operation cannot be fulfilled on deployments.apps \\\"frontend\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0917 07:35:41.520530       1 event.go:291] \"Event occurred\" object=\"kubectl-4300/frontend-685fc574d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: frontend-685fc574d5-dvxmz\"\nI0917 07:35:41.531766       1 event.go:291] \"Event occurred\" object=\"kubectl-4300/frontend-685fc574d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: frontend-685fc574d5-r6ckm\"\nI0917 07:35:41.532142       1 event.go:291] \"Event occurred\" object=\"kubectl-4300/frontend-685fc574d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: frontend-685fc574d5-c7rr8\"\nI0917 07:35:41.771439       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5603\nI0917 07:35:41.890849       1 namespace_controller.go:185] Namespace has been deleted metrics-grabber-6776\nI0917 07:35:42.072196       1 event.go:291] \"Event occurred\" object=\"kubectl-4300/agnhost-primary\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set agnhost-primary-5db8ddd565 to 1\"\nI0917 07:35:42.072518       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-4300/agnhost-primary-5db8ddd565\" need=1 creating=1\nI0917 07:35:42.077155       1 event.go:291] \"Event occurred\" object=\"kubectl-4300/agnhost-primary-5db8ddd565\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: 
agnhost-primary-5db8ddd565-hkgsc\"\nI0917 07:35:42.092862       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kubectl-4300/agnhost-primary\" err=\"Operation cannot be fulfilled on deployments.apps \\\"agnhost-primary\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0917 07:35:42.335146       1 tokens_controller.go:262] error synchronizing serviceaccount projected-5968/default: secrets \"default-token-4kbw2\" is forbidden: unable to create new content in namespace projected-5968 because it is being terminated\nE0917 07:35:42.420005       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0917 07:35:42.523457       1 tokens_controller.go:262] error synchronizing serviceaccount secret-namespace-8399/default: secrets \"default-token-95w9l\" is forbidden: unable to create new content in namespace secret-namespace-8399 because it is being terminated\nI0917 07:35:42.619373       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-4300/agnhost-replica-6bcf79b489\" need=2 creating=2\nI0917 07:35:42.620197       1 event.go:291] \"Event occurred\" object=\"kubectl-4300/agnhost-replica\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set agnhost-replica-6bcf79b489 to 2\"\nI0917 07:35:42.624964       1 event.go:291] \"Event occurred\" object=\"kubectl-4300/agnhost-replica-6bcf79b489\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-replica-6bcf79b489-hq54r\"\nI0917 07:35:42.629823       1 event.go:291] \"Event occurred\" object=\"kubectl-4300/agnhost-replica-6bcf79b489\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: 
agnhost-replica-6bcf79b489-qpnp9\"\nI0917 07:35:42.642413       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kubectl-4300/agnhost-replica\" err=\"Operation cannot be fulfilled on deployments.apps \\\"agnhost-replica\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0917 07:35:42.739843       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-3031d0cc-7382-48b2-890e-48154fef2540\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-1302^dbe49d1b-1789-11ec-a2e2-a6341466e799\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nE0917 07:35:43.164768       1 tokens_controller.go:262] error synchronizing serviceaccount volume-6059/default: secrets \"default-token-lkqlj\" is forbidden: unable to create new content in namespace volume-6059 because it is being terminated\nI0917 07:35:43.253663       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1651-227/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0917 07:35:43.275429       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-3031d0cc-7382-48b2-890e-48154fef2540\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-1302^dbe49d1b-1789-11ec-a2e2-a6341466e799\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:35:43.275904       1 event.go:291] \"Event occurred\" object=\"provisioning-1302/pod-subpath-test-dynamicpv-5lfb\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-3031d0cc-7382-48b2-890e-48154fef2540\\\" \"\nI0917 07:35:43.457380       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1651-227/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nE0917 07:35:43.761971       1 tokens_controller.go:262] error synchronizing serviceaccount apply-7520/default: secrets \"default-token-wg4m8\" is forbidden: unable to create new content in namespace apply-7520 because it is being terminated\nI0917 07:35:43.836009       1 event.go:291] \"Event occurred\" object=\"volume-expand-3273/awshnlvt\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:35:43.922336       1 stateful_set_control.go:521] StatefulSet statefulset-3578/ss terminating Pod ss-0 for scale down\nI0917 07:35:43.927060       1 event.go:291] \"Event occurred\" object=\"statefulset-3578/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0917 07:35:43.981383       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-9642-9032\nE0917 07:35:43.983437       1 tokens_controller.go:262] error synchronizing serviceaccount volume-6787/default: secrets \"default-token-6m9qr\" is forbidden: unable to create new content in namespace volume-6787 because it is being terminated\nI0917 07:35:44.040146       1 event.go:291] \"Event occurred\" object=\"volume-expand-3273/awshnlvt\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:35:44.065926       1 pv_controller.go:879] volume \"pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250\" entered phase \"Bound\"\nI0917 07:35:44.066128       1 pv_controller.go:982] volume \"pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250\" bound to claim \"provisioning-5320/pvc-7677p\"\nI0917 
07:35:44.073702       1 pv_controller.go:823] claim \"provisioning-5320/pvc-7677p\" entered phase \"Bound\"\nI0917 07:35:44.694966       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0bc052de5e9824396\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:35:45.204417       1 event.go:291] \"Event occurred\" object=\"volume-expand-1881/awszpttk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:35:45.204518       1 event.go:291] \"Event occurred\" object=\"volume-provisioning-6/pvc-blrdj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:35:45.204549       1 event.go:291] \"Event occurred\" object=\"volume-expand-3219/awsb6rc9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:35:45.204618       1 event.go:291] \"Event occurred\" object=\"volume-expand-3273/awshnlvt\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:35:45.497340       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-3890/pvc-xh6dd\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-3890\\\" or manually created by system administrator\"\nI0917 
07:35:45.512937       1 pv_controller.go:879] volume \"pvc-140cb3f8-7176-40ae-950f-c47ef905f1e3\" entered phase \"Bound\"\nI0917 07:35:45.512971       1 pv_controller.go:982] volume \"pvc-140cb3f8-7176-40ae-950f-c47ef905f1e3\" bound to claim \"csi-mock-volumes-3890/pvc-xh6dd\"\nI0917 07:35:45.518262       1 pv_controller.go:823] claim \"csi-mock-volumes-3890/pvc-xh6dd\" entered phase \"Bound\"\nI0917 07:35:45.581780       1 event.go:291] \"Event occurred\" object=\"topology-1325/pvc-xh8sk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:35:45.582121       1 event.go:291] \"Event occurred\" object=\"topology-1325/pvc-xh8sk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:35:45.901873       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-140cb3f8-7176-40ae-950f-c47ef905f1e3\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-3890^4\") from node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nE0917 07:35:45.916121       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-5603-9821/default: secrets \"default-token-hhhmh\" is forbidden: unable to create new content in namespace csi-mock-volumes-5603-9821 because it is being terminated\nE0917 07:35:45.935848       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-8696/pvc-7kgb6: storageclass.storage.k8s.io \"volume-8696\" not found\nI0917 07:35:45.936046       1 event.go:291] \"Event occurred\" object=\"volume-8696/pvc-7kgb6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" 
message=\"storageclass.storage.k8s.io \\\"volume-8696\\\" not found\"\nI0917 07:35:46.035801       1 pv_controller.go:879] volume \"local-wlppz\" entered phase \"Available\"\nI0917 07:35:46.327984       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-1615-1786\nI0917 07:35:46.432216       1 pv_controller.go:879] volume \"local-pvsk4cv\" entered phase \"Available\"\nI0917 07:35:46.467757       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-140cb3f8-7176-40ae-950f-c47ef905f1e3\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-3890^4\") from node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:35:46.468137       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-3890/pvc-volume-tester-ps8cm\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-140cb3f8-7176-40ae-950f-c47ef905f1e3\\\" \"\nI0917 07:35:46.536511       1 pv_controller.go:930] claim \"persistent-local-volumes-test-213/pvc-5fbpz\" bound to volume \"local-pvsk4cv\"\nI0917 07:35:46.547235       1 pv_controller.go:879] volume \"local-pvsk4cv\" entered phase \"Bound\"\nI0917 07:35:46.547268       1 pv_controller.go:982] volume \"local-pvsk4cv\" bound to claim \"persistent-local-volumes-test-213/pvc-5fbpz\"\nI0917 07:35:46.576171       1 pv_controller.go:823] claim \"persistent-local-volumes-test-213/pvc-5fbpz\" entered phase \"Bound\"\nE0917 07:35:46.636734       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-6072/default: secrets \"default-token-5wkl6\" is forbidden: unable to create new content in namespace csi-mock-volumes-6072 because it is being terminated\nI0917 07:35:47.000625       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-8914/test-new-deployment\"\nI0917 07:35:47.018006       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume 
\"pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0bc052de5e9824396\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:35:47.018167       1 event.go:291] \"Event occurred\" object=\"provisioning-5320/pod-b5eb5c58-63d3-4ed5-9841-a30220507841\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250\\\" \"\nE0917 07:35:47.128876       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-41/default: secrets \"default-token-8vkgx\" is forbidden: unable to create new content in namespace disruption-41 because it is being terminated\nI0917 07:35:47.375621       1 pv_controller.go:879] volume \"pvc-c30b98b5-74e7-4290-8f1c-85c24039a101\" entered phase \"Bound\"\nI0917 07:35:47.375651       1 pv_controller.go:982] volume \"pvc-c30b98b5-74e7-4290-8f1c-85c24039a101\" bound to claim \"volume-expand-3273/awshnlvt\"\nI0917 07:35:47.385008       1 pv_controller.go:823] claim \"volume-expand-3273/awshnlvt\" entered phase \"Bound\"\nI0917 07:35:47.431913       1 namespace_controller.go:185] Namespace has been deleted projected-5968\nI0917 07:35:47.547048       1 namespace_controller.go:185] Namespace has been deleted secret-namespace-8399\nI0917 07:35:48.059541       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-c30b98b5-74e7-4290-8f1c-85c24039a101\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-007eae2e1b2db1b9f\") from node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:35:48.322105       1 namespace_controller.go:185] Namespace has been deleted volume-6059\nE0917 07:35:48.672062       1 namespace_controller.go:162] deletion of namespace apply-8919 failed: unexpected items still remain in namespace: apply-8919 for gvr: /v1, Resource=pods\nI0917 07:35:48.744742       1 namespace_controller.go:185] Namespace has been deleted 
provisioning-7791\nI0917 07:35:48.890078       1 namespace_controller.go:185] Namespace has been deleted ephemeral-6332-1053\nI0917 07:35:48.938134       1 pv_controller.go:879] volume \"pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05\" entered phase \"Bound\"\nI0917 07:35:48.938280       1 pv_controller.go:982] volume \"pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05\" bound to claim \"topology-1325/pvc-xh8sk\"\nI0917 07:35:48.946042       1 pv_controller.go:823] claim \"topology-1325/pvc-xh8sk\" entered phase \"Bound\"\nI0917 07:35:48.997930       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-1453/sample-webhook-deployment-78988fc6cd\" need=1 creating=1\nI0917 07:35:48.999912       1 event.go:291] \"Event occurred\" object=\"webhook-1453/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-78988fc6cd to 1\"\nI0917 07:35:49.006236       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-1453/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0917 07:35:49.011615       1 event.go:291] \"Event occurred\" object=\"webhook-1453/sample-webhook-deployment-78988fc6cd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-78988fc6cd-k94wk\"\nI0917 07:35:49.140091       1 namespace_controller.go:185] Namespace has been deleted volume-6787\nE0917 07:35:49.170787       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:35:49.570735       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume 
\"pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e1ee1b2dcf2199f9\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:35:49.751592       1 namespace_controller.go:185] Namespace has been deleted dns-9922\nI0917 07:35:49.779100       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-b44c6e1e-d672-4189-8bf1-78ba65dcc9ea\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a4a67a4702949b81\") on node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:35:49.781270       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-c30b98b5-74e7-4290-8f1c-85c24039a101\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-007eae2e1b2db1b9f\") from node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:35:49.781508       1 event.go:291] \"Event occurred\" object=\"volume-expand-3273/pod-a6b17e37-0a22-4f42-abca-91219e68131c\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-c30b98b5-74e7-4290-8f1c-85c24039a101\\\" \"\nI0917 07:35:49.785934       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-b44c6e1e-d672-4189-8bf1-78ba65dcc9ea\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a4a67a4702949b81\") on node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:35:49.911838       1 event.go:291] \"Event occurred\" object=\"volume-expand-1881/awszpttk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:35:49.914217       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-expand-1881/awszpttk\"\nE0917 07:35:50.034544       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list 
*v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:35:50.379144       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-571/pvc-cscbc\"\nI0917 07:35:50.384993       1 pv_controller.go:640] volume \"local-2zjhj\" is released and reclaim policy \"Retain\" will be executed\nI0917 07:35:50.388738       1 pv_controller.go:879] volume \"local-2zjhj\" entered phase \"Released\"\nI0917 07:35:50.448754       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6072-5523/csi-mockplugin-696ff69\" objectUID=305a3e01-1413-4549-a404-e51079b06843 kind=\"ControllerRevision\" virtual=false\nI0917 07:35:50.448918       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-6072-5523/csi-mockplugin\nI0917 07:35:50.448959       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6072-5523/csi-mockplugin-0\" objectUID=6e84e0da-b35b-443f-92bf-e0a9e9a30f6a kind=\"Pod\" virtual=false\nI0917 07:35:50.462888       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6072-5523/csi-mockplugin-696ff69\" objectUID=305a3e01-1413-4549-a404-e51079b06843 kind=\"ControllerRevision\" propagationPolicy=Background\nI0917 07:35:50.463029       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6072-5523/csi-mockplugin-0\" objectUID=6e84e0da-b35b-443f-92bf-e0a9e9a30f6a kind=\"Pod\" propagationPolicy=Background\nI0917 07:35:50.482284       1 pv_controller_base.go:505] deletion of claim \"provisioning-571/pvc-cscbc\" was already processed\nI0917 07:35:50.644474       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6072-5523/csi-mockplugin-attacher-57844cb4f\" objectUID=58c9592c-aa93-49cc-857f-3ac628b75069 kind=\"ControllerRevision\" virtual=false\nI0917 07:35:50.644548       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-6072-5523/csi-mockplugin-attacher\nI0917 07:35:50.644626       1 
garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6072-5523/csi-mockplugin-attacher-0\" objectUID=b4825881-0aea-418d-8aec-0916af1dad7f kind=\"Pod\" virtual=false\nI0917 07:35:50.646857       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6072-5523/csi-mockplugin-attacher-57844cb4f\" objectUID=58c9592c-aa93-49cc-857f-3ac628b75069 kind=\"ControllerRevision\" propagationPolicy=Background\nI0917 07:35:50.647027       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6072-5523/csi-mockplugin-attacher-0\" objectUID=b4825881-0aea-418d-8aec-0916af1dad7f kind=\"Pod\" propagationPolicy=Background\nW0917 07:35:50.961740       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-3578/test\", retrying. Error: EndpointSlice informer cache is out of date\nI0917 07:35:51.068470       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"fsgroupchangepolicy-445/aws52mff\"\nI0917 07:35:51.076446       1 pv_controller.go:640] volume \"pvc-b44c6e1e-d672-4189-8bf1-78ba65dcc9ea\" is released and reclaim policy \"Delete\" will be executed\nI0917 07:35:51.079251       1 pv_controller.go:879] volume \"pvc-b44c6e1e-d672-4189-8bf1-78ba65dcc9ea\" entered phase \"Released\"\nI0917 07:35:51.082811       1 pv_controller.go:1340] isVolumeReleased[pvc-b44c6e1e-d672-4189-8bf1-78ba65dcc9ea]: volume is released\nI0917 07:35:51.685681       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6072\nE0917 07:35:51.777556       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:35:51.827100       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-213/pod-929f47f5-d2ed-4e56-a31e-d35fcf1911d0\" PVC=\"persistent-local-volumes-test-213/pvc-5fbpz\"\nI0917 
07:35:51.827582       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-213/pvc-5fbpz\"\nI0917 07:35:53.364241       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e1ee1b2dcf2199f9\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:35:53.364479       1 event.go:291] \"Event occurred\" object=\"topology-1325/pod-5595f7f1-d9e6-4514-9335-3685772cff4a\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05\\\" \"\nI0917 07:35:53.381672       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4300/agnhost-replica-8nf7h\" objectUID=a6c9c414-5ccf-4944-a0bf-ec61858765fc kind=\"EndpointSlice\" virtual=false\nI0917 07:35:53.385971       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4300/agnhost-replica-8nf7h\" objectUID=a6c9c414-5ccf-4944-a0bf-ec61858765fc kind=\"EndpointSlice\" propagationPolicy=Background\nI0917 07:35:53.410174       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"fsgroupchangepolicy-1187/awsvslj9\"\nI0917 07:35:53.427106       1 pv_controller.go:640] volume \"pvc-a8772827-f97c-4a58-ab1f-61cf81b88754\" is released and reclaim policy \"Delete\" will be executed\nI0917 07:35:53.432062       1 pv_controller.go:879] volume \"pvc-a8772827-f97c-4a58-ab1f-61cf81b88754\" entered phase \"Released\"\nI0917 07:35:53.438135       1 pv_controller.go:1340] isVolumeReleased[pvc-a8772827-f97c-4a58-ab1f-61cf81b88754]: volume is released\nI0917 07:35:53.593841       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set 
test-rolling-update-with-lb-864fb64577 to 3\"\nI0917 07:35:53.593999       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-505/test-rolling-update-with-lb-864fb64577\" need=3 creating=3\nI0917 07:35:53.604339       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb-864fb64577\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-864fb64577-hlqbq\"\nI0917 07:35:53.606982       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-505/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0917 07:35:53.613691       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb-864fb64577\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-864fb64577-fffqk\"\nI0917 07:35:53.626745       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb-864fb64577\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-864fb64577-djdbc\"\nI0917 07:35:53.846441       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4300/agnhost-primary-v7hbf\" objectUID=f4ca6dd0-304a-4283-bd98-c98c0d705070 kind=\"EndpointSlice\" virtual=false\nI0917 07:35:53.849845       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4300/agnhost-primary-v7hbf\" objectUID=f4ca6dd0-304a-4283-bd98-c98c0d705070 kind=\"EndpointSlice\" propagationPolicy=Background\nI0917 07:35:54.127313       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1651/pvc-whmlw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" 
reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:35:54.235141       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1651/pvc-whmlw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-1651\\\" or manually created by system administrator\"\nI0917 07:35:54.235621       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1651/pvc-whmlw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-1651\\\" or manually created by system administrator\"\nI0917 07:35:54.251004       1 pv_controller.go:879] volume \"pvc-d055fc8c-664b-4b48-9818-58e545fd8248\" entered phase \"Bound\"\nI0917 07:35:54.251158       1 pv_controller.go:982] volume \"pvc-d055fc8c-664b-4b48-9818-58e545fd8248\" bound to claim \"csi-mock-volumes-1651/pvc-whmlw\"\nI0917 07:35:54.257398       1 pv_controller.go:823] claim \"csi-mock-volumes-1651/pvc-whmlw\" entered phase \"Bound\"\nI0917 07:35:54.312959       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-3578/ss-677d6db895\" objectUID=9468bbda-9a88-41de-9f5b-ae0ae2829838 kind=\"ControllerRevision\" virtual=false\nI0917 07:35:54.313226       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4300/frontend-ggjbm\" objectUID=79fe6de7-a9ec-4962-8acf-5ecf287ec091 kind=\"EndpointSlice\" virtual=false\nI0917 07:35:54.313701       1 stateful_set.go:440] StatefulSet has been deleted statefulset-3578/ss\nI0917 07:35:54.316399       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-3578/ss-677d6db895\" objectUID=9468bbda-9a88-41de-9f5b-ae0ae2829838 kind=\"ControllerRevision\" propagationPolicy=Background\nI0917 07:35:54.317607       1 
garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4300/frontend-ggjbm\" objectUID=79fe6de7-a9ec-4962-8acf-5ecf287ec091 kind=\"EndpointSlice\" propagationPolicy=Background\nI0917 07:35:54.768769       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4300/frontend-685fc574d5\" objectUID=0837f564-7710-4c34-87d0-ff636abf57d0 kind=\"ReplicaSet\" virtual=false\nI0917 07:35:54.769024       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"kubectl-4300/frontend\"\nI0917 07:35:54.771103       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4300/frontend-685fc574d5\" objectUID=0837f564-7710-4c34-87d0-ff636abf57d0 kind=\"ReplicaSet\" propagationPolicy=Background\nI0917 07:35:54.773677       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4300/frontend-685fc574d5-dvxmz\" objectUID=bc157bac-d30d-48e5-9121-cf730fd29129 kind=\"Pod\" virtual=false\nI0917 07:35:54.773904       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4300/frontend-685fc574d5-r6ckm\" objectUID=1274a351-f89e-4c3a-b14d-ef4560b3dd1f kind=\"Pod\" virtual=false\nI0917 07:35:54.774051       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4300/frontend-685fc574d5-c7rr8\" objectUID=7569c35f-9523-456f-a25d-05647817ea47 kind=\"Pod\" virtual=false\nI0917 07:35:54.776160       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4300/frontend-685fc574d5-r6ckm\" objectUID=1274a351-f89e-4c3a-b14d-ef4560b3dd1f kind=\"Pod\" propagationPolicy=Background\nI0917 07:35:54.776165       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4300/frontend-685fc574d5-c7rr8\" objectUID=7569c35f-9523-456f-a25d-05647817ea47 kind=\"Pod\" propagationPolicy=Background\nI0917 07:35:54.776469       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4300/frontend-685fc574d5-dvxmz\" objectUID=bc157bac-d30d-48e5-9121-cf730fd29129 kind=\"Pod\" propagationPolicy=Background\nI0917 
07:35:55.225666       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4300/agnhost-primary-5db8ddd565\" objectUID=a1cd7860-428e-4afc-b55c-14960d75ea2b kind=\"ReplicaSet\" virtual=false\nI0917 07:35:55.226032       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"kubectl-4300/agnhost-primary\"\nI0917 07:35:55.227556       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4300/agnhost-primary-5db8ddd565\" objectUID=a1cd7860-428e-4afc-b55c-14960d75ea2b kind=\"ReplicaSet\" propagationPolicy=Background\nI0917 07:35:55.237600       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4300/agnhost-primary-5db8ddd565-hkgsc\" objectUID=9458f2c2-9eb5-42ba-b861-bba0afd640d4 kind=\"Pod\" virtual=false\nI0917 07:35:55.239880       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-d055fc8c-664b-4b48-9818-58e545fd8248\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-1651^4\") from node \"ip-172-20-53-192.eu-west-2.compute.internal\" \nI0917 07:35:55.242514       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4300/agnhost-primary-5db8ddd565-hkgsc\" objectUID=9458f2c2-9eb5-42ba-b861-bba0afd640d4 kind=\"Pod\" propagationPolicy=Background\nI0917 07:35:55.388720       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-1474/slow-terminating-unready-pod\" need=1 creating=1\nI0917 07:35:55.394379       1 event.go:291] \"Event occurred\" object=\"services-1474/slow-terminating-unready-pod\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: slow-terminating-unready-pod-fl2jb\"\nI0917 07:35:55.454814       1 event.go:291] \"Event occurred\" object=\"provisioning-87/aws8s6v4\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:35:55.632669       1 
pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-213/pod-929f47f5-d2ed-4e56-a31e-d35fcf1911d0\" PVC=\"persistent-local-volumes-test-213/pvc-5fbpz\"\nI0917 07:35:55.632844       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-213/pvc-5fbpz\"\nI0917 07:35:55.654337       1 event.go:291] \"Event occurred\" object=\"provisioning-87/aws8s6v4\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:35:55.654364       1 event.go:291] \"Event occurred\" object=\"provisioning-87/aws8s6v4\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:35:55.683881       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4300/agnhost-replica-6bcf79b489\" objectUID=7c4e4c8a-acfa-43f2-a40e-7d27422233ae kind=\"ReplicaSet\" virtual=false\nI0917 07:35:55.683914       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"kubectl-4300/agnhost-replica\"\nI0917 07:35:55.685469       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4300/agnhost-replica-6bcf79b489\" objectUID=7c4e4c8a-acfa-43f2-a40e-7d27422233ae kind=\"ReplicaSet\" propagationPolicy=Background\nI0917 07:35:55.687474       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4300/agnhost-replica-6bcf79b489-hq54r\" objectUID=183979bd-e2b5-4db4-a49e-2da9edd4cec3 kind=\"Pod\" virtual=false\nI0917 07:35:55.687732       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4300/agnhost-replica-6bcf79b489-qpnp9\" objectUID=b8f95a8c-cc8a-42cb-904c-6509997654c0 
kind=\"Pod\" virtual=false\nI0917 07:35:55.689656       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4300/agnhost-replica-6bcf79b489-hq54r\" objectUID=183979bd-e2b5-4db4-a49e-2da9edd4cec3 kind=\"Pod\" propagationPolicy=Background\nI0917 07:35:55.689820       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4300/agnhost-replica-6bcf79b489-qpnp9\" objectUID=b8f95a8c-cc8a-42cb-904c-6509997654c0 kind=\"Pod\" propagationPolicy=Background\nE0917 07:35:55.780493       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-6072-5523/default: secrets \"default-token-c4vf8\" is forbidden: unable to create new content in namespace csi-mock-volumes-6072-5523 because it is being terminated\nI0917 07:35:55.802860       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-d055fc8c-664b-4b48-9818-58e545fd8248\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-1651^4\") from node \"ip-172-20-53-192.eu-west-2.compute.internal\" \nI0917 07:35:55.803111       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1651/pvc-volume-tester-6tv52\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-d055fc8c-664b-4b48-9818-58e545fd8248\\\" \"\nI0917 07:35:56.030042       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-213/pod-929f47f5-d2ed-4e56-a31e-d35fcf1911d0\" PVC=\"persistent-local-volumes-test-213/pvc-5fbpz\"\nI0917 07:35:56.030776       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-213/pvc-5fbpz\"\nI0917 07:35:56.034550       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-213/pvc-5fbpz\"\nI0917 07:35:56.039613       1 pv_controller.go:640] volume \"local-pvsk4cv\" is released and reclaim policy \"Retain\" will be executed\nI0917 07:35:56.043605       
1 pv_controller.go:879] volume \"local-pvsk4cv\" entered phase \"Released\"\nI0917 07:35:56.046613       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-213/pvc-5fbpz\" was already processed\nI0917 07:35:56.591798       1 pv_controller.go:1340] isVolumeReleased[pvc-b44c6e1e-d672-4189-8bf1-78ba65dcc9ea]: volume is released\nI0917 07:35:56.676512       1 event.go:291] \"Event occurred\" object=\"provisioning-9285/aws5ns7p\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:35:56.700471       1 pv_controller_base.go:505] deletion of claim \"fsgroupchangepolicy-445/aws52mff\" was already processed\nI0917 07:35:56.793608       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-b44c6e1e-d672-4189-8bf1-78ba65dcc9ea\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a4a67a4702949b81\") on node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:35:56.878862       1 event.go:291] \"Event occurred\" object=\"provisioning-9285/aws5ns7p\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:35:57.070532       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-a8772827-f97c-4a58-ab1f-61cf81b88754\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-03e58ef53f3a96c7e\") on node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:35:57.073202       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-a8772827-f97c-4a58-ab1f-61cf81b88754\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-03e58ef53f3a96c7e\") on node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nE0917 07:35:57.098106       1 tokens_controller.go:262] error 
synchronizing serviceaccount pods-246/default: secrets \"default-token-n8f88\" is forbidden: unable to create new content in namespace pods-246 because it is being terminated\nW0917 07:35:57.170872       1 reconciler.go:376] Multi-Attach error for volume \"pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0bc052de5e9824396\") from node \"ip-172-20-33-78.eu-west-2.compute.internal\" Volume is already used by pods provisioning-5320/pod-b5eb5c58-63d3-4ed5-9841-a30220507841 on node ip-172-20-60-186.eu-west-2.compute.internal\nI0917 07:35:57.170948       1 event.go:291] \"Event occurred\" object=\"provisioning-5320/pvc-volume-tester-writer-djzbb\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250\\\" Volume is already used by pod(s) pod-b5eb5c58-63d3-4ed5-9841-a30220507841\"\nI0917 07:35:57.253015       1 event.go:291] \"Event occurred\" object=\"provisioning-5165-5566/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0917 07:35:57.717730       1 namespace_controller.go:185] Namespace has been deleted emptydir-2134\nI0917 07:35:57.734037       1 event.go:291] \"Event occurred\" object=\"provisioning-5165/pvc-nlftn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-5165\\\" or manually created by system administrator\"\nI0917 07:35:57.734337       1 event.go:291] \"Event occurred\" object=\"provisioning-5165/pvc-nlftn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner 
\\\"csi-hostpath-provisioning-5165\\\" or manually created by system administrator\"\nI0917 07:35:57.988427       1 controller.go:400] Ensuring load balancer for service deployment-505/test-rolling-update-with-lb\nI0917 07:35:57.990933       1 controller.go:901] Adding finalizer to service deployment-505/test-rolling-update-with-lb\nI0917 07:35:57.991245       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Normal\" reason=\"EnsuringLoadBalancer\" message=\"Ensuring load balancer\"\nI0917 07:35:58.007132       1 aws.go:3915] EnsureLoadBalancer(e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io, deployment-505, test-rolling-update-with-lb, eu-west-2, , [{ TCP <nil> 80 {0 80 } 32377}], map[])\nI0917 07:35:58.529344       1 event.go:291] \"Event occurred\" object=\"volume-expand-3273/awshnlvt\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalExpanding\" message=\"CSI migration enabled for kubernetes.io/aws-ebs; waiting for external resizer to expand the pvc\"\nI0917 07:35:58.750963       1 aws.go:3136] Existing security group ingress: sg-0e2e2a268790913bf []\nI0917 07:35:58.751135       1 aws.go:3167] Adding security group ingress: sg-0e2e2a268790913bf [{\n  FromPort: 80,\n  IpProtocol: \"tcp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 80\n} {\n  FromPort: 3,\n  IpProtocol: \"icmp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 4\n}]\nI0917 07:35:58.869748       1 aws_loadbalancer.go:1009] Creating load balancer for deployment-505/test-rolling-update-with-lb with name: aeec8dc081f264bbaa0f13c10fd11c5a\nE0917 07:35:58.910626       1 publisher.go:173] syncing \"fail-closed-namesapce\" failed: Internal error occurred: failed calling webhook \"fail-closed.k8s.io\": Post \"https://e2e-test-webhook.webhook-1453.svc:8443/configmaps?timeout=10s\": x509: certificate signed by unknown authority\nE0917 07:35:58.924539       1 
publisher.go:173] syncing \"fail-closed-namesapce\" failed: Internal error occurred: failed calling webhook \"fail-closed.k8s.io\": Post \"https://e2e-test-webhook.webhook-1453.svc:8443/configmaps?timeout=10s\": x509: certificate signed by unknown authority\nE0917 07:35:58.940042       1 publisher.go:173] syncing \"fail-closed-namesapce\" failed: Internal error occurred: failed calling webhook \"fail-closed.k8s.io\": Post \"https://e2e-test-webhook.webhook-1453.svc:8443/configmaps?timeout=10s\": x509: certificate signed by unknown authority\nE0917 07:35:58.965616       1 publisher.go:173] syncing \"fail-closed-namesapce\" failed: Internal error occurred: failed calling webhook \"fail-closed.k8s.io\": Post \"https://e2e-test-webhook.webhook-1453.svc:8443/configmaps?timeout=10s\": x509: certificate signed by unknown authority\nI0917 07:35:59.018805       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"services-1300/pause-pod\"\nE0917 07:35:59.034846       1 publisher.go:173] syncing \"fail-closed-namesapce\" failed: Internal error occurred: failed calling webhook \"fail-closed.k8s.io\": Post \"https://e2e-test-webhook.webhook-1453.svc:8443/configmaps?timeout=10s\": x509: certificate signed by unknown authority\nI0917 07:35:59.047306       1 pv_controller.go:879] volume \"pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6\" entered phase \"Bound\"\nI0917 07:35:59.047339       1 pv_controller.go:982] volume \"pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6\" bound to claim \"provisioning-87/aws8s6v4\"\nI0917 07:35:59.056797       1 pv_controller.go:823] claim \"provisioning-87/aws8s6v4\" entered phase \"Bound\"\nI0917 07:35:59.312350       1 event.go:291] \"Event occurred\" object=\"volume-expand-3219/awsb6rc9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:35:59.314380       1 pvc_protection_controller.go:291] \"PVC is unused\" 
PVC=\"volume-expand-3219/awsb6rc9\"\nI0917 07:35:59.505329       1 aws_loadbalancer.go:1212] Updating load-balancer attributes for \"aeec8dc081f264bbaa0f13c10fd11c5a\"\nE0917 07:35:59.510130       1 controller.go:307] error processing service deployment-505/test-rolling-update-with-lb (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io/i-09ba2e67462e963e9 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-west-2:768319786644:loadbalancer/aeec8dc081f264bbaa0f13c10fd11c5a\\n\\tstatus code: 403, request id: b74cabe7-d336-4188-9152-1bdf934322f0\"\nI0917 07:35:59.510471       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"SyncLoadBalancerFailed\" message=\"Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \\\"AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io/i-09ba2e67462e963e9 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-west-2:768319786644:loadbalancer/aeec8dc081f264bbaa0f13c10fd11c5a\\\\n\\\\tstatus code: 403, request id: b74cabe7-d336-4188-9152-1bdf934322f0\\\"\"\nI0917 07:35:59.585309       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-1453/e2e-test-webhook-g8vj7\" objectUID=c0f9627f-4dea-4359-9c66-02ccd75d172e kind=\"EndpointSlice\" virtual=false\nI0917 07:35:59.592142       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1453/e2e-test-webhook-g8vj7\" objectUID=c0f9627f-4dea-4359-9c66-02ccd75d172e kind=\"EndpointSlice\" propagationPolicy=Background\nI0917 07:35:59.688912       
1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-1453/sample-webhook-deployment\"\nI0917 07:35:59.688921       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-1453/sample-webhook-deployment-78988fc6cd\" objectUID=4b3c01f1-b5ba-4f02-b2e2-b1ffc49ccfdc kind=\"ReplicaSet\" virtual=false\nI0917 07:35:59.690660       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1453/sample-webhook-deployment-78988fc6cd\" objectUID=4b3c01f1-b5ba-4f02-b2e2-b1ffc49ccfdc kind=\"ReplicaSet\" propagationPolicy=Background\nI0917 07:35:59.692530       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0d40b921d2b3fa40b\") from node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:35:59.693430       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-1453/sample-webhook-deployment-78988fc6cd-k94wk\" objectUID=61c8b8bc-6233-4b45-950b-6050d748ac49 kind=\"Pod\" virtual=false\nI0917 07:35:59.696388       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1453/sample-webhook-deployment-78988fc6cd-k94wk\" objectUID=61c8b8bc-6233-4b45-950b-6050d748ac49 kind=\"Pod\" propagationPolicy=Background\nE0917 07:35:59.770053       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-3578/default: secrets \"default-token-ncbm9\" is forbidden: unable to create new content in namespace statefulset-3578 because it is being terminated\nE0917 07:35:59.838308       1 tokens_controller.go:262] error synchronizing serviceaccount events-2665/default: secrets \"default-token-zsb2j\" is forbidden: unable to create new content in namespace events-2665 because it is being terminated\nI0917 07:35:59.855935       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-3578/test-bpng4\" objectUID=6dd8034b-1999-4214-824e-82ea7280e317 kind=\"EndpointSlice\" 
virtual=false\nI0917 07:35:59.859422       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-3578/test-bpng4\" objectUID=6dd8034b-1999-4214-824e-82ea7280e317 kind=\"EndpointSlice\" propagationPolicy=Background\nI0917 07:36:00.205493       1 pv_controller.go:930] claim \"volume-8696/pvc-7kgb6\" bound to volume \"local-wlppz\"\nI0917 07:36:00.205873       1 event.go:291] \"Event occurred\" object=\"provisioning-9285/aws5ns7p\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:36:00.205901       1 event.go:291] \"Event occurred\" object=\"provisioning-5165/pvc-nlftn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-5165\\\" or manually created by system administrator\"\nI0917 07:36:00.209439       1 pv_controller.go:1340] isVolumeReleased[pvc-a8772827-f97c-4a58-ab1f-61cf81b88754]: volume is released\nI0917 07:36:00.216385       1 pv_controller.go:879] volume \"local-wlppz\" entered phase \"Bound\"\nI0917 07:36:00.216416       1 pv_controller.go:982] volume \"local-wlppz\" bound to claim \"volume-8696/pvc-7kgb6\"\nI0917 07:36:00.224100       1 pv_controller.go:823] claim \"volume-8696/pvc-7kgb6\" entered phase \"Bound\"\nI0917 07:36:00.225122       1 event.go:291] \"Event occurred\" object=\"volume-provisioning-6/pvc-blrdj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:36:00.253732       1 pv_controller.go:879] volume \"pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500\" entered phase \"Bound\"\nI0917 
07:36:00.253767       1 pv_controller.go:982] volume \"pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500\" bound to claim \"provisioning-9285/aws5ns7p\"\nI0917 07:36:00.261007       1 pv_controller.go:823] claim \"provisioning-9285/aws5ns7p\" entered phase \"Bound\"\nI0917 07:36:00.338512       1 namespace_controller.go:185] Namespace has been deleted volume-expand-1881\nE0917 07:36:00.849575       1 tokens_controller.go:262] error synchronizing serviceaccount projected-9394/default: secrets \"default-token-n4ffv\" is forbidden: unable to create new content in namespace projected-9394 because it is being terminated\nI0917 07:36:00.902885       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09a510e110b0280e1\") from node \"ip-172-20-53-192.eu-west-2.compute.internal\" \nI0917 07:36:00.952793       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6072-5523\nE0917 07:36:01.025809       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-4300/default: secrets \"default-token-qpj6x\" is forbidden: unable to create new content in namespace kubectl-4300 because it is being terminated\nI0917 07:36:02.171018       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"disruption-6912/rs\" need=3 creating=3\nI0917 07:36:02.178043       1 event.go:291] \"Event occurred\" object=\"disruption-6912/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-bjpfg\"\nI0917 07:36:02.185240       1 event.go:291] \"Event occurred\" object=\"disruption-6912/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-mqchq\"\nI0917 07:36:02.189644       1 event.go:291] \"Event occurred\" object=\"disruption-6912/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" 
message=\"Created pod: rs-75n2w\"\nE0917 07:36:02.214740       1 disruption.go:534] Error syncing PodDisruptionBudget disruption-6912/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy \"foo\": the object has been modified; please apply your changes to the latest version and try again\nI0917 07:36:02.344644       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-4694/test-rs\" need=1 creating=1\nI0917 07:36:02.348625       1 event.go:291] \"Event occurred\" object=\"replicaset-4694/test-rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rs-67s87\"\nI0917 07:36:02.466283       1 pv_controller.go:879] volume \"pvc-72c507b8-3ed5-4c5f-939d-62c60e28be28\" entered phase \"Bound\"\nI0917 07:36:02.466502       1 pv_controller.go:982] volume \"pvc-72c507b8-3ed5-4c5f-939d-62c60e28be28\" bound to claim \"provisioning-5165/pvc-nlftn\"\nI0917 07:36:02.472089       1 pv_controller.go:823] claim \"provisioning-5165/pvc-nlftn\" entered phase \"Bound\"\nI0917 07:36:03.028843       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-213\nI0917 07:36:03.308285       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09a510e110b0280e1\") from node \"ip-172-20-53-192.eu-west-2.compute.internal\" \nI0917 07:36:03.308496       1 event.go:291] \"Event occurred\" object=\"provisioning-9285/pod-subpath-test-dynamicpv-qc4x\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500\\\" \"\nE0917 07:36:03.348680       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-9349/pvc-tx5sn: storageclass.storage.k8s.io \"provisioning-9349\" not found\nI0917 07:36:03.348882       1 
event.go:291] \"Event occurred\" object=\"provisioning-9349/pvc-tx5sn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-9349\\\" not found\"\nI0917 07:36:03.449830       1 pv_controller.go:879] volume \"local-29fhp\" entered phase \"Available\"\nI0917 07:36:03.822164       1 namespace_controller.go:185] Namespace has been deleted provisioning-571\nI0917 07:36:04.011009       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-8696/pvc-7kgb6\"\nI0917 07:36:04.015816       1 pv_controller.go:640] volume \"local-wlppz\" is released and reclaim policy \"Retain\" will be executed\nI0917 07:36:04.019417       1 pv_controller.go:879] volume \"local-wlppz\" entered phase \"Released\"\nI0917 07:36:04.116399       1 pv_controller_base.go:505] deletion of claim \"volume-8696/pvc-7kgb6\" was already processed\nE0917 07:36:04.161308       1 tokens_controller.go:262] error synchronizing serviceaccount fail-closed-namesapce/default: secrets \"default-token-wprjn\" is forbidden: unable to create new content in namespace fail-closed-namesapce because it is being terminated\nE0917 07:36:04.428083       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-1453/default: secrets \"default-token-wjfqh\" is forbidden: unable to create new content in namespace webhook-1453 because it is being terminated\nI0917 07:36:04.510793       1 controller.go:400] Ensuring load balancer for service deployment-505/test-rolling-update-with-lb\nI0917 07:36:04.510861       1 aws.go:3915] EnsureLoadBalancer(e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io, deployment-505, test-rolling-update-with-lb, eu-west-2, , [{ TCP <nil> 80 {0 80 } 32377}], map[])\nI0917 07:36:04.511203       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Normal\" reason=\"EnsuringLoadBalancer\" message=\"Ensuring load 
balancer\"\nE0917 07:36:04.548100       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-1453-markers/default: secrets \"default-token-w497b\" is forbidden: unable to create new content in namespace webhook-1453-markers because it is being terminated\nI0917 07:36:04.743883       1 aws.go:3136] Existing security group ingress: sg-0e2e2a268790913bf [{\n  FromPort: 80,\n  IpProtocol: \"tcp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 80\n} {\n  FromPort: 3,\n  IpProtocol: \"icmp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 4\n}]\nE0917 07:36:04.762215       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-8342/pvc-nlwd7: storageclass.storage.k8s.io \"provisioning-8342\" not found\nI0917 07:36:04.762455       1 event.go:291] \"Event occurred\" object=\"provisioning-8342/pvc-nlwd7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-8342\\\" not found\"\nI0917 07:36:04.788510       1 aws_loadbalancer.go:1185] Creating additional load balancer tags for aeec8dc081f264bbaa0f13c10fd11c5a\nI0917 07:36:04.810577       1 aws_loadbalancer.go:1212] Updating load-balancer attributes for \"aeec8dc081f264bbaa0f13c10fd11c5a\"\nE0917 07:36:04.815494       1 controller.go:307] error processing service deployment-505/test-rolling-update-with-lb (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io/i-09ba2e67462e963e9 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-west-2:768319786644:loadbalancer/aeec8dc081f264bbaa0f13c10fd11c5a\\n\\tstatus code: 403, request id: ef01dc0c-ab04-4f79-a797-f204e92e5d33\"\nI0917 07:36:04.815605       1 event.go:291] 
\"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"SyncLoadBalancerFailed\" message=\"Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \\\"AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io/i-09ba2e67462e963e9 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-west-2:768319786644:loadbalancer/aeec8dc081f264bbaa0f13c10fd11c5a\\\\n\\\\tstatus code: 403, request id: ef01dc0c-ab04-4f79-a797-f204e92e5d33\\\"\"\nI0917 07:36:04.834063       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-5104/inline-volume-tester-fdqb4\" PVC=\"ephemeral-5104/inline-volume-tester-fdqb4-my-volume-0\"\nI0917 07:36:04.834252       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-5104/inline-volume-tester-fdqb4-my-volume-0\"\nI0917 07:36:04.863169       1 pv_controller.go:879] volume \"local-lp2m6\" entered phase \"Available\"\nI0917 07:36:04.925283       1 namespace_controller.go:185] Namespace has been deleted statefulset-3578\nI0917 07:36:04.957453       1 namespace_controller.go:185] Namespace has been deleted events-2665\nI0917 07:36:05.035718       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-5104/inline-volume-tester-fdqb4-my-volume-0\"\nI0917 07:36:05.040831       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5104/inline-volume-tester-fdqb4\" objectUID=55087449-f4ee-4c96-a033-02b883120cd0 kind=\"Pod\" virtual=false\nI0917 07:36:05.042945       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-5104, name: inline-volume-tester-fdqb4, uid: 55087449-f4ee-4c96-a033-02b883120cd0]\nI0917 07:36:05.043089       1 pv_controller.go:640] volume 
"pvc-c7ad0beb-8c12-4d90-9e03-4cd0a4c6d6fb" is released and reclaim policy "Delete" will be executed
I0917 07:36:05.045540       1 pv_controller.go:879] volume "pvc-c7ad0beb-8c12-4d90-9e03-4cd0a4c6d6fb" entered phase "Released"
I0917 07:36:05.051882       1 pv_controller.go:1340] isVolumeReleased[pvc-c7ad0beb-8c12-4d90-9e03-4cd0a4c6d6fb]: volume is released
I0917 07:36:05.067197       1 pv_controller_base.go:505] deletion of claim "ephemeral-5104/inline-volume-tester-fdqb4-my-volume-0" was already processed
I0917 07:36:05.642019       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-72c507b8-3ed5-4c5f-939d-62c60e28be28" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-5165^e8bd503a-1789-11ec-bc38-9a0e93fd5cd1") from node "ip-172-20-51-79.eu-west-2.compute.internal" 
I0917 07:36:05.987244       1 namespace_controller.go:185] Namespace has been deleted projected-9394
I0917 07:36:06.194409       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-72c507b8-3ed5-4c5f-939d-62c60e28be28" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-5165^e8bd503a-1789-11ec-bc38-9a0e93fd5cd1") from node "ip-172-20-51-79.eu-west-2.compute.internal" 
I0917 07:36:06.194662       1 event.go:291] "Event occurred" object="provisioning-5165/hostpath-injector" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-72c507b8-3ed5-4c5f-939d-62c60e28be28\" "
E0917 07:36:06.195613       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:36:06.210303       1 namespace_controller.go:185] Namespace has been deleted kubectl-4300
E0917 07:36:06.462056       1 tokens_controller.go:262] error synchronizing serviceaccount projected-3312/default: secrets "default-token-c52rz" is forbidden: unable to create new content in namespace projected-3312 because it is being terminated
I0917 07:36:06.649362       1 pvc_protection_controller.go:291] "PVC is unused" PVC="topology-1325/pvc-xh8sk"
I0917 07:36:06.660346       1 pv_controller.go:640] volume "pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05" is released and reclaim policy "Delete" will be executed
I0917 07:36:06.670999       1 pv_controller.go:879] volume "pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05" entered phase "Released"
I0917 07:36:06.681778       1 pv_controller.go:1340] isVolumeReleased[pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05]: volume is released
I0917 07:36:07.071784       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0bc052de5e9824396") on node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:07.084639       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0bc052de5e9824396") on node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:07.092458       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0e1ee1b2dcf2199f9") on node "ip-172-20-60-186.eu-west-2.compute.internal" 
E0917 07:36:07.098973       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:36:07.102267       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0e1ee1b2dcf2199f9") on node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:07.144103       1 event.go:291] "Event occurred" object="volume-expand-3559/awst8tx2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0917 07:36:07.315820       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-140cb3f8-7176-40ae-950f-c47ef905f1e3" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-3890^4") on node "ip-172-20-51-79.eu-west-2.compute.internal" 
I0917 07:36:07.322698       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-140cb3f8-7176-40ae-950f-c47ef905f1e3" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-3890^4") on node "ip-172-20-51-79.eu-west-2.compute.internal" 
I0917 07:36:07.374563       1 event.go:291] "Event occurred" object="volume-expand-3559/awst8tx2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0917 07:36:07.478854       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d40b921d2b3fa40b") from node "ip-172-20-51-79.eu-west-2.compute.internal" 
I0917 07:36:07.479358       1 event.go:291] "Event occurred" object="provisioning-87/pod-subpath-test-dynamicpv-s9zr" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6\" "
I0917 07:36:07.895128       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-140cb3f8-7176-40ae-950f-c47ef905f1e3" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-3890^4") on node "ip-172-20-51-79.eu-west-2.compute.internal" 
I0917 07:36:08.046578       1 replica_set.go:599] "Too many replicas" replicaSet="services-1474/slow-terminating-unready-pod" need=0 deleting=1
E0917 07:36:08.046900       1 replica_set.go:205] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{slow-terminating-unready-pod  services-1474  ade814e0-d317-4223-8616-4f959a0357b9 36705 2 2021-09-17 07:35:55 +0000 UTC <nil> <nil> map[name:slow-terminating-unready-pod testid:tolerate-unready-e7abc627-d0ca-4d7a-8de9-9a7760e7d799] map[] [] []  [{e2e.test Update v1 2021-09-17 07:35:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:testid":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:name":{},"f:testid":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"slow-terminating-unready-pod\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:lifecycle":{".":{},"f:preStop":{".":{},"f:exec":{".":{},"f:command":{}}}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update v1 2021-09-17 07:35:55 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: slow-terminating-unready-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:slow-terminating-unready-pod testid:tolerate-unready-e7abc627-d0ca-4d7a-8de9-9a7760e7d799] map[] [] []  []} {[] [] [{slow-terminating-unready-pod k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [netexec --http-port=80]  [{ 0 80 TCP }] [] [] {map[] map[]} [] [] nil Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/false],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,} nil &Lifecycle{PostStart:nil,PreStop:&Handler{Exec:&ExecAction{Command:[/bin/sleep 600],},HTTPGet:nil,TCPSocket:nil,},} /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002a10888 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0917 07:36:08.047007       1 controller_utils.go:592] "Deleting pod" controller="slow-terminating-unready-pod" pod="services-1474/slow-terminating-unready-pod-fl2jb"
I0917 07:36:08.050114       1 event.go:291] "Event occurred" object="services-1474/slow-terminating-unready-pod" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: slow-terminating-unready-pod-fl2jb"
E0917 07:36:08.401026       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:36:08.754197       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-3890/pvc-xh6dd"
I0917 07:36:08.760583       1 pv_controller.go:640] volume "pvc-140cb3f8-7176-40ae-950f-c47ef905f1e3" is released and reclaim policy "Delete" will be executed
I0917 07:36:08.763249       1 pv_controller.go:879] volume "pvc-140cb3f8-7176-40ae-950f-c47ef905f1e3" entered phase "Released"
I0917 07:36:08.765402       1 pv_controller.go:1340] isVolumeReleased[pvc-140cb3f8-7176-40ae-950f-c47ef905f1e3]: volume is released
I0917 07:36:08.787350       1 pv_controller_base.go:505] deletion of claim "csi-mock-volumes-3890/pvc-xh6dd" was already processed
I0917 07:36:08.831812       1 replica_set.go:563] "Too few replicas" replicaSet="replicaset-4694/test-rs" need=2 creating=1
I0917 07:36:08.837401       1 event.go:291] "Event occurred" object="replicaset-4694/test-rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rs-kv4zk"
I0917 07:36:09.025522       1 replica_set.go:563] "Too few replicas" replicaSet="replicaset-4694/test-rs" need=4 creating=2
I0917 07:36:09.030829       1 event.go:291] "Event occurred" object="replicaset-4694/test-rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rs-8bfdz"
I0917 07:36:09.047894       1 event.go:291] "Event occurred" object="replicaset-4694/test-rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rs-6s4wr"
I0917 07:36:09.214361       1 namespace_controller.go:185] Namespace has been deleted fail-closed-namesapce
E0917 07:36:09.258258       1 namespace_controller.go:162] deletion of namespace apply-8919 failed: unexpected items still remain in namespace: apply-8919 for gvr: /v1, Resource=pods
I0917 07:36:09.499658       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-expand-3273/awshnlvt"
I0917 07:36:09.508104       1 pv_controller.go:640] volume "pvc-c30b98b5-74e7-4290-8f1c-85c24039a101" is released and reclaim policy "Delete" will be executed
I0917 07:36:09.512021       1 pv_controller.go:879] volume "pvc-c30b98b5-74e7-4290-8f1c-85c24039a101" entered phase "Released"
I0917 07:36:09.513920       1 pv_controller.go:1340] isVolumeReleased[pvc-c30b98b5-74e7-4290-8f1c-85c24039a101]: volume is released
I0917 07:36:09.532451       1 namespace_controller.go:185] Namespace has been deleted webhook-1453
I0917 07:36:09.696394       1 namespace_controller.go:185] Namespace has been deleted webhook-1453-markers
I0917 07:36:09.747556       1 namespace_controller.go:185] Namespace has been deleted volume-expand-3219
I0917 07:36:09.759811       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-c30b98b5-74e7-4290-8f1c-85c24039a101" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-007eae2e1b2db1b9f") on node "ip-172-20-33-78.eu-west-2.compute.internal" 
I0917 07:36:09.763883       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-c30b98b5-74e7-4290-8f1c-85c24039a101" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-007eae2e1b2db1b9f") on node "ip-172-20-33-78.eu-west-2.compute.internal" 
I0917 07:36:09.774460       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-c7ad0beb-8c12-4d90-9e03-4cd0a4c6d6fb" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-5104^b13b273e-1789-11ec-8103-3eab9e6bf0f7") on node "ip-172-20-33-78.eu-west-2.compute.internal" 
I0917 07:36:09.778933       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-c7ad0beb-8c12-4d90-9e03-4cd0a4c6d6fb" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-5104^b13b273e-1789-11ec-8103-3eab9e6bf0f7") on node "ip-172-20-33-78.eu-west-2.compute.internal" 
I0917 07:36:09.834068       1 pv_controller.go:1340] isVolumeReleased[pvc-a8772827-f97c-4a58-ab1f-61cf81b88754]: volume is released
I0917 07:36:09.980055       1 pv_controller_base.go:505] deletion of claim "fsgroupchangepolicy-1187/awsvslj9" was already processed
I0917 07:36:10.217293       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-a8772827-f97c-4a58-ab1f-61cf81b88754" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-03e58ef53f3a96c7e") on node "ip-172-20-51-79.eu-west-2.compute.internal" 
I0917 07:36:10.339038       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-c7ad0beb-8c12-4d90-9e03-4cd0a4c6d6fb" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-5104^b13b273e-1789-11ec-8103-3eab9e6bf0f7") on node "ip-172-20-33-78.eu-west-2.compute.internal" 
E0917 07:36:10.350156       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods
E0917 07:36:10.453681       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods
E0917 07:36:10.559304       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods
E0917 07:36:10.678914       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods
I0917 07:36:10.841451       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-9285/aws5ns7p"
I0917 07:36:10.849562       1 pv_controller.go:640] volume "pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500" is released and reclaim policy "Delete" will be executed
I0917 07:36:10.854328       1 pv_controller.go:879] volume "pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500" entered phase "Released"
I0917 07:36:10.861221       1 pv_controller.go:1340] isVolumeReleased[pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500]: volume is released
E0917 07:36:10.888646       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods
I0917 07:36:10.938076       1 pv_controller.go:879] volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" entered phase "Bound"
I0917 07:36:10.938106       1 pv_controller.go:982] volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" bound to claim "volume-expand-3559/awst8tx2"
I0917 07:36:10.945770       1 pv_controller.go:823] claim "volume-expand-3559/awst8tx2" entered phase "Bound"
I0917 07:36:11.004389       1 pv_controller.go:879] volume "local-pvgqbxx" entered phase "Available"
I0917 07:36:11.096282       1 pv_controller.go:930] claim "persistent-local-volumes-test-7417/pvc-plds6" bound to volume "local-pvgqbxx"
I0917 07:36:11.129440       1 pv_controller.go:879] volume "local-pvgqbxx" entered phase "Bound"
I0917 07:36:11.129843       1 pv_controller.go:982] volume "local-pvgqbxx" bound to claim "persistent-local-volumes-test-7417/pvc-plds6"
E0917 07:36:11.141446       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods
I0917 07:36:11.168172       1 pv_controller.go:823] claim "persistent-local-volumes-test-7417/pvc-plds6" entered phase "Bound"
I0917 07:36:11.216552       1 replica_set.go:563] "Too few replicas" replicaSet="disruption-6912/rs" need=3 creating=1
I0917 07:36:11.232797       1 event.go:291] "Event occurred" object="disruption-6912/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-hq6tn"
E0917 07:36:11.312219       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:36:11.397816       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cac65654da8c239f") from node "ip-172-20-60-186.eu-west-2.compute.internal" 
E0917 07:36:11.450617       1 tokens_controller.go:262] error synchronizing serviceaccount gc-1124/default: secrets "default-token-tqwlz" is forbidden: unable to create new content in namespace gc-1124 because it is being terminated
I0917 07:36:11.482256       1 namespace_controller.go:185] Namespace has been deleted projected-3312
E0917 07:36:11.551992       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods
I0917 07:36:11.616344       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-7417/pvc-plds6"
I0917 07:36:11.625838       1 pv_controller.go:640] volume "local-pvgqbxx" is released and reclaim policy "Retain" will be executed
I0917 07:36:11.632733       1 pv_controller.go:879] volume "local-pvgqbxx" entered phase "Released"
I0917 07:36:11.711515       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-7417/pvc-plds6" was already processed
E0917 07:36:11.997784       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods
I0917 07:36:12.211243       1 namespace_controller.go:185] Namespace has been deleted fsgroupchangepolicy-445
E0917 07:36:12.279666       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-625/default: secrets "default-token-l68h7" is forbidden: unable to create new content in namespace disruption-625 because it is being terminated
E0917 07:36:12.735668       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods
I0917 07:36:13.131830       1 pv_controller.go:879] volume "local-pvgpp78" entered phase "Available"
I0917 07:36:13.223453       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-09a510e110b0280e1") on node "ip-172-20-53-192.eu-west-2.compute.internal" 
I0917 07:36:13.226749       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-09a510e110b0280e1") on node "ip-172-20-53-192.eu-west-2.compute.internal" 
I0917 07:36:13.227441       1 pv_controller.go:930] claim "persistent-local-volumes-test-8962/pvc-m8r9t" bound to volume "local-pvgpp78"
I0917 07:36:13.239636       1 pv_controller.go:879] volume "local-pvgpp78" entered phase "Bound"
I0917 07:36:13.239664       1 pv_controller.go:982] volume "local-pvgpp78" bound to claim "persistent-local-volumes-test-8962/pvc-m8r9t"
I0917 07:36:13.240677       1 garbagecollector.go:471] "Processing object" object="services-1474/tolerate-unready-ztw4m" objectUID=c54bc60b-d489-42a6-b77c-10c667e205a4 kind="EndpointSlice" virtual=false
I0917 07:36:13.248362       1 garbagecollector.go:580] "Deleting object" object="services-1474/tolerate-unready-ztw4m" objectUID=c54bc60b-d489-42a6-b77c-10c667e205a4 kind="EndpointSlice" propagationPolicy=Background
I0917 07:36:13.261777       1 pv_controller.go:823] claim "persistent-local-volumes-test-8962/pvc-m8r9t" entered phase "Bound"
I0917 07:36:13.597583       1 namespace_controller.go:185] Namespace has been deleted disruption-41
E0917 07:36:14.088435       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-3890/default: secrets "default-token-9cpp9" is forbidden: unable to create new content in namespace csi-mock-volumes-3890 because it is being terminated
E0917 07:36:14.178746       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods
I0917 07:36:14.305017       1 replica_set.go:563] "Too few replicas" replicaSet="disruption-6912/rs" need=3 creating=1
I0917 07:36:14.312913       1 event.go:291] "Event occurred" object="disruption-6912/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-458kx"
E0917 07:36:14.346025       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:36:14.477408       1 replica_set.go:563] "Too few replicas" replicaSet="replicaset-4694/test-rs" need=4 creating=1
I0917 07:36:14.816504       1 controller.go:400] Ensuring load balancer for service deployment-505/test-rolling-update-with-lb
I0917 07:36:14.816552       1 aws.go:3915] EnsureLoadBalancer(e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io, deployment-505, test-rolling-update-with-lb, eu-west-2, , [{ TCP <nil> 80 {0 80 } 32377}], map[])
I0917 07:36:14.816708       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0917 07:36:14.847622       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0bc052de5e9824396") on node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:14.919364       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-1302/csi-hostpathmrgrr"
I0917 07:36:14.926436       1 pv_controller.go:640] volume "pvc-3031d0cc-7382-48b2-890e-48154fef2540" is released and reclaim policy "Delete" will be executed
I0917 07:36:14.929474       1 pv_controller.go:879] volume "pvc-3031d0cc-7382-48b2-890e-48154fef2540" entered phase "Released"
I0917 07:36:14.931370       1 pv_controller.go:1340] isVolumeReleased[pvc-3031d0cc-7382-48b2-890e-48154fef2540]: volume is released
I0917 07:36:14.936683       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0bc052de5e9824396") from node "ip-172-20-33-78.eu-west-2.compute.internal" 
I0917 07:36:14.945850       1 pv_controller_base.go:505] deletion of claim "provisioning-1302/csi-hostpathmrgrr" was already processed
I0917 07:36:15.000302       1 deployment_controller.go:583] "Deployment has been deleted" deployment="webhook-1780/sample-webhook-deployment"
I0917 07:36:15.115769       1 aws.go:3136] Existing security group ingress: sg-0e2e2a268790913bf [{
  FromPort: 80,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 80
} {
  FromPort: 3,
  IpProtocol: "icmp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 4
}]
I0917 07:36:15.196345       1 aws_loadbalancer.go:1185] Creating additional load balancer tags for aeec8dc081f264bbaa0f13c10fd11c5a
I0917 07:36:15.205957       1 pv_controller.go:930] claim "provisioning-9349/pvc-tx5sn" bound to volume "local-29fhp"
I0917 07:36:15.209919       1 pv_controller.go:1340] isVolumeReleased[pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05]: volume is released
I0917 07:36:15.212138       1 pv_controller.go:1340] isVolumeReleased[pvc-c30b98b5-74e7-4290-8f1c-85c24039a101]: volume is released
I0917 07:36:15.212590       1 pv_controller.go:1340] isVolumeReleased[pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500]: volume is released
I0917 07:36:15.216936       1 pv_controller.go:879] volume "local-29fhp" entered phase "Bound"
I0917 07:36:15.217100       1 pv_controller.go:982] volume "local-29fhp" bound to claim "provisioning-9349/pvc-tx5sn"
I0917 07:36:15.224300       1 pv_controller.go:823] claim "provisioning-9349/pvc-tx5sn" entered phase "Bound"
I0917 07:36:15.224727       1 pv_controller.go:930] claim "provisioning-8342/pvc-nlwd7" bound to volume "local-lp2m6"
I0917 07:36:15.226498       1 aws_loadbalancer.go:1212] Updating load-balancer attributes for "aeec8dc081f264bbaa0f13c10fd11c5a"
I0917 07:36:15.235956       1 pv_controller.go:879] volume "local-lp2m6" entered phase "Bound"
I0917 07:36:15.235986       1 pv_controller.go:982] volume "local-lp2m6" bound to claim "provisioning-8342/pvc-nlwd7"
I0917 07:36:15.245457       1 pv_controller.go:823] claim "provisioning-8342/pvc-nlwd7" entered phase "Bound"
I0917 07:36:15.246063       1 event.go:291] "Event occurred" object="volume-provisioning-6/pvc-blrdj" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0917 07:36:15.333474       1 namespace_controller.go:185] Namespace has been deleted volume-8696
I0917 07:36:15.486938       1 aws.go:4534] Adding rule for traffic from the load balancer (sg-0e2e2a268790913bf) to instances (sg-0821880f91f33c471)
I0917 07:36:15.547896       1 aws.go:3211] Existing security group ingress: sg-0821880f91f33c471 [{
  FromPort: 30000,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 32767
} {
  IpProtocol: "-1",
  UserIdGroupPairs: [{
      GroupId: "sg-04504549bb9433bae",
      UserId: "768319786644"
    },{
      GroupId: "sg-0821880f91f33c471",
      UserId: "768319786644"
    }]
} {
  FromPort: 22,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "35.184.107.109/32"
    }],
  ToPort: 22
} {
  FromPort: 30000,
  IpProtocol: "udp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 32767
}]
I0917 07:36:15.547954       1 aws.go:3108] Comparing sg-0e2e2a268790913bf to sg-04504549bb9433bae
I0917 07:36:15.547960       1 aws.go:3108] Comparing sg-0e2e2a268790913bf to sg-0821880f91f33c471
I0917 07:36:15.547965       1 aws.go:3239] Adding security group ingress: sg-0821880f91f33c471 [{
  IpProtocol: "-1",
  UserIdGroupPairs: [{
      GroupId: "sg-0e2e2a268790913bf"
    }]
}]
I0917 07:36:15.655894       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-5104-3793/csi-hostpathplugin
I0917 07:36:15.656022       1 garbagecollector.go:471] "Processing object" object="ephemeral-5104-3793/csi-hostpathplugin-0" objectUID=b8a3cb52-7d01-4f28-bd1b-d0742c240417 kind="Pod" virtual=false
I0917 07:36:15.656337       1 garbagecollector.go:471] "Processing object" object="ephemeral-5104-3793/csi-hostpathplugin-8447b9f796" objectUID=7a4049fc-18d9-426f-b8de-1967889c276c kind="ControllerRevision" virtual=false
I0917 07:36:15.658453       1 garbagecollector.go:580] "Deleting object" object="ephemeral-5104-3793/csi-hostpathplugin-8447b9f796" objectUID=7a4049fc-18d9-426f-b8de-1967889c276c kind="ControllerRevision" propagationPolicy=Background
I0917 07:36:15.659557       1 garbagecollector.go:580] "Deleting object" object="ephemeral-5104-3793/csi-hostpathplugin-0" objectUID=b8a3cb52-7d01-4f28-bd1b-d0742c240417 kind="Pod" propagationPolicy=Background
I0917 07:36:15.914834       1 aws_loadbalancer.go:1460] Instances added to load-balancer aeec8dc081f264bbaa0f13c10fd11c5a
I0917 07:36:15.914913       1 aws.go:4300] Loadbalancer aeec8dc081f264bbaa0f13c10fd11c5a (deployment-505/test-rolling-update-with-lb) has DNS name aeec8dc081f264bbaa0f13c10fd11c5a-2025392810.eu-west-2.elb.amazonaws.com
I0917 07:36:15.914978       1 controller.go:942] Patching status for service deployment-505/test-rolling-update-with-lb
I0917 07:36:15.915291       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
I0917 07:36:16.050201       1 namespace_controller.go:185] Namespace has been deleted ephemeral-5104
E0917 07:36:16.068140       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:36:16.564496       1 namespace_controller.go:185] Namespace has been deleted gc-1124
E0917 07:36:16.855311       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods
I0917 07:36:16.862336       1 pv_controller.go:1340] isVolumeReleased[pvc-c30b98b5-74e7-4290-8f1c-85c24039a101]: volume is released
I0917 07:36:17.007520       1 pv_controller_base.go:505] deletion of claim "volume-expand-3273/awshnlvt" was already processed
I0917 07:36:17.060162       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-3031d0cc-7382-48b2-890e-48154fef2540" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-1302^dbe49d1b-1789-11ec-a2e2-a6341466e799") on node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:17.062289       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-3031d0cc-7382-48b2-890e-48154fef2540" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-1302^dbe49d1b-1789-11ec-a2e2-a6341466e799") on node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:17.109397       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-1651/pvc-whmlw"
I0917 07:36:17.116802       1 pv_controller.go:640] volume "pvc-d055fc8c-664b-4b48-9818-58e545fd8248" is released and reclaim policy "Delete" will be executed
I0917 07:36:17.121281       1 pv_controller.go:879] volume "pvc-d055fc8c-664b-4b48-9818-58e545fd8248" entered phase "Released"
I0917 07:36:17.125184       1 pv_controller.go:1340] isVolumeReleased[pvc-d055fc8c-664b-4b48-9818-58e545fd8248]: volume is released
I0917 07:36:17.309328       1 namespace_controller.go:185] Namespace has been deleted disruption-625
I0917 07:36:17.332914       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0bc052de5e9824396") from node "ip-172-20-33-78.eu-west-2.compute.internal" 
I0917 07:36:17.333119       1 event.go:291] "Event occurred" object="provisioning-5320/pvc-volume-tester-writer-djzbb" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250\" "
I0917 07:36:17.599741       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-c30b98b5-74e7-4290-8f1c-85c24039a101" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-007eae2e1b2db1b9f") on node "ip-172-20-33-78.eu-west-2.compute.internal" 
I0917 07:36:17.619834       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-3031d0cc-7382-48b2-890e-48154fef2540" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-1302^dbe49d1b-1789-11ec-a2e2-a6341466e799") on node "ip-172-20-60-186.eu-west-2.compute.internal" 
E0917 07:36:17.762914       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-7417/default: secrets "default-token-76md2" is forbidden: unable to create new content in namespace persistent-local-volumes-test-7417 because it is being terminated
I0917 07:36:17.982797       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-3890-6156/csi-mockplugin
I0917 07:36:17.982800       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-3890-6156/csi-mockplugin-7787898469" objectUID=25e07d7c-2f53-40a6-ab0a-a8aefc6863d2 kind="ControllerRevision" virtual=false
I0917 07:36:17.983018       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-3890-6156/csi-mockplugin-0" objectUID=6e8b39ac-3f2b-402d-98f8-013d4610e3e6 kind="Pod" virtual=false
I0917 07:36:17.994922       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-3890-6156/csi-mockplugin-7787898469" objectUID=25e07d7c-2f53-40a6-ab0a-a8aefc6863d2 kind="ControllerRevision" propagationPolicy=Background
I0917 07:36:17.994939       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-3890-6156/csi-mockplugin-0" objectUID=6e8b39ac-3f2b-402d-98f8-013d4610e3e6 kind="Pod" propagationPolicy=Background
I0917 07:36:18.190749       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-3890-6156/csi-mockplugin-attacher
I0917 07:36:18.191020       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-3890-6156/csi-mockplugin-attacher-79bd879f85" objectUID=4de0ed31-cb02-4e2a-81e6-892f74529b51 kind="ControllerRevision" virtual=false
I0917 07:36:18.191108       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-3890-6156/csi-mockplugin-attacher-0" objectUID=71d2eee9-fe9b-4f0d-bab3-c66281620154 kind="Pod" virtual=false
I0917 07:36:18.193300       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-3890-6156/csi-mockplugin-attacher-79bd879f85" objectUID=4de0ed31-cb02-4e2a-81e6-892f74529b51 kind="ControllerRevision" propagationPolicy=Background
I0917 07:36:18.193438       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-3890-6156/csi-mockplugin-attacher-0" objectUID=71d2eee9-fe9b-4f0d-bab3-c66281620154 kind="Pod" propagationPolicy=Background
E0917 07:36:18.545410       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:36:19.218263       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-3890
E0917 07:36:19.245155       1 tokens_controller.go:262] error synchronizing serviceaccount fsgroupchangepolicy-1187/default: secrets "default-token-h8vvq" is forbidden: unable to create new content in namespace fsgroupchangepolicy-1187 because it is being terminated
I0917 07:36:19.319029       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cac65654da8c239f") from node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:19.319342       1 event.go:291] "Event occurred" object="volume-expand-3559/pod-6d41e096-058b-491b-884b-0f28c807b073" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6\" "
E0917 07:36:19.410256       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-9292/pvc-fptnt: storageclass.storage.k8s.io "volume-9292" not found
I0917 07:36:19.410502       1 event.go:291] "Event occurred" object="volume-9292/pvc-fptnt" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-9292\" not found"
I0917 07:36:19.520178       1 pv_controller.go:879] volume "local-cgfr8" entered phase "Available"
I0917 07:36:19.546121       1 namespace_controller.go:185] Namespace has been deleted replicaset-4694
I0917 07:36:19.622840       1 replica_set.go:563] "Too few replicas" replicaSet="disruption-6912/rs" need=3 creating=1
I0917 07:36:19.679208       1 garbagecollector.go:471] "Processing object" object="disruption-6912/rs-458kx" objectUID=8288bb02-3ebc-4305-80bf-d4e8e6854144 kind="Pod" virtual=false
I0917 07:36:19.679241       1 garbagecollector.go:471] "Processing object" object="disruption-6912/rs-hq6tn" objectUID=0a02cbbc-d6aa-40ad-b87b-c8860deea821 kind="Pod" virtual=false
I0917 07:36:19.679252       1 garbagecollector.go:471] "Processing object" object="disruption-6912/rs-mqchq" objectUID=b5401f12-ffb0-4667-b3a0-97fa0c23d9d0 kind="Pod" virtual=false
I0917 07:36:19.927225       1 event.go:291] "Event occurred" object="provisioning-5165/pvc-9j9nf" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-5165\" or manually created by system administrator"
E0917 07:36:19.976214       1 tokens_controller.go:262] error synchronizing serviceaccount flexvolume-5164/default: secrets "default-token-zqdh6" is forbidden: unable to create new content in namespace flexvolume-5164 because it is being terminated
I0917 07:36:20.021017       1 pv_controller.go:879] volume "pvc-6a20ab54-abbd-4bcd-a7d7-583cd5d55901" entered phase "Bound"
I0917 07:36:20.021047       1 pv_controller.go:982] volume "pvc-6a20ab54-abbd-4bcd-a7d7-583cd5d55901" bound to claim "provisioning-5165/pvc-9j9nf"
I0917 07:36:20.035189       1 pv_controller.go:823] claim "provisioning-5165/pvc-9j9nf" entered phase "Bound"
I0917 07:36:20.059411       1 pv_controller.go:1340] isVolumeReleased[pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05]: volume is released
I0917 07:36:20.102001       1 pv_controller.go:1340] isVolumeReleased[pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500]: volume is released
I0917 07:36:20.217837       1 pv_controller_base.go:505] deletion 
of claim \"topology-1325/pvc-xh8sk\" was already processed\nI0917 07:36:20.247351       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-b90fb0d0-cb29-4b91-8dc8-95a08d21e500\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09a510e110b0280e1\") on node \"ip-172-20-53-192.eu-west-2.compute.internal\" \nI0917 07:36:20.264997       1 pv_controller_base.go:505] deletion of claim \"provisioning-9285/aws5ns7p\" was already processed\nI0917 07:36:20.297136       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-be29c99a-0f07-4465-8d74-bc6f8fce0c05\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e1ee1b2dcf2199f9\") on node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nE0917 07:36:20.887061       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-5104-3793/default: secrets \"default-token-ldm8n\" is forbidden: unable to create new content in namespace ephemeral-5104-3793 because it is being terminated\nI0917 07:36:20.896974       1 namespace_controller.go:185] Namespace has been deleted projected-3500\nI0917 07:36:21.538803       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-9349/pvc-tx5sn\"\nI0917 07:36:21.554195       1 pv_controller.go:640] volume \"local-29fhp\" is released and reclaim policy \"Retain\" will be executed\nI0917 07:36:21.557737       1 pv_controller.go:879] volume \"local-29fhp\" entered phase \"Released\"\nI0917 07:36:21.610383       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-6a20ab54-abbd-4bcd-a7d7-583cd5d55901\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5165^f32eade9-1789-11ec-bc38-9a0e93fd5cd1\") from node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:36:21.643321       1 pv_controller_base.go:505] deletion of claim \"provisioning-9349/pvc-tx5sn\" was already processed\nE0917 07:36:22.069619       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: 
unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods\nI0917 07:36:22.158745       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-6a20ab54-abbd-4bcd-a7d7-583cd5d55901\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5165^f32eade9-1789-11ec-bc38-9a0e93fd5cd1\") from node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:36:22.158898       1 event.go:291] \"Event occurred\" object=\"provisioning-5165/hostpath-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6a20ab54-abbd-4bcd-a7d7-583cd5d55901\\\" \"\nI0917 07:36:22.803557       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-7417\nI0917 07:36:23.239014       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-d055fc8c-664b-4b48-9818-58e545fd8248\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-1651^4\") on node \"ip-172-20-53-192.eu-west-2.compute.internal\" \nI0917 07:36:23.245177       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-d055fc8c-664b-4b48-9818-58e545fd8248\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-1651^4\") on node \"ip-172-20-53-192.eu-west-2.compute.internal\" \nE0917 07:36:23.340618       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-3890-6156/default: secrets \"default-token-954km\" is forbidden: unable to create new content in namespace csi-mock-volumes-3890-6156 because it is being terminated\nI0917 07:36:23.603322       1 namespace_controller.go:185] Namespace has been deleted services-1474\nI0917 07:36:23.793533       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-d055fc8c-664b-4b48-9818-58e545fd8248\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-1651^4\") on node \"ip-172-20-53-192.eu-west-2.compute.internal\" 
\nI0917 07:36:24.138381       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-1651/pvc-whmlw\" was already processed\nE0917 07:36:24.150621       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0917 07:36:24.297570       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:36:24.366350       1 namespace_controller.go:185] Namespace has been deleted fsgroupchangepolicy-1187\nI0917 07:36:25.033574       1 stateful_set.go:440] StatefulSet has been deleted provisioning-1302-6914/csi-hostpathplugin\nI0917 07:36:25.033599       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-1302-6914/csi-hostpathplugin-0\" objectUID=49f0a274-725e-4af1-8f46-d138c11655c7 kind=\"Pod\" virtual=false\nI0917 07:36:25.033574       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-1302-6914/csi-hostpathplugin-7d6c5b5cd\" objectUID=145b9007-d0e8-4655-aae5-fbc476c298ee kind=\"ControllerRevision\" virtual=false\nI0917 07:36:25.036770       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-1302-6914/csi-hostpathplugin-0\" objectUID=49f0a274-725e-4af1-8f46-d138c11655c7 kind=\"Pod\" propagationPolicy=Background\nI0917 07:36:25.039470       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-1302-6914/csi-hostpathplugin-7d6c5b5cd\" objectUID=145b9007-d0e8-4655-aae5-fbc476c298ee kind=\"ControllerRevision\" propagationPolicy=Background\nI0917 07:36:25.092001       1 namespace_controller.go:185] Namespace has been deleted flexvolume-5164\nE0917 07:36:25.277569       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-3273/default: secrets 
\"default-token-swgsn\" is forbidden: unable to create new content in namespace volume-expand-3273 because it is being terminated\nI0917 07:36:25.366261       1 event.go:291] \"Event occurred\" object=\"pvc-protection-6650/pvc-protectionbz8gz\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:36:25.421923       1 namespace_controller.go:185] Namespace has been deleted provisioning-1302\nI0917 07:36:25.473031       1 event.go:291] \"Event occurred\" object=\"pvc-protection-6650/pvc-protectionbz8gz\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:36:26.118082       1 event.go:291] \"Event occurred\" object=\"pv-4694/pvc-jvgjq\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"FailedBinding\" message=\"no persistent volumes available for this claim and no storage class is set\"\nI0917 07:36:26.220083       1 pv_controller.go:879] volume \"nfs-4745r\" entered phase \"Available\"\nE0917 07:36:26.466451       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9285/default: secrets \"default-token-dwqmh\" is forbidden: unable to create new content in namespace provisioning-9285 because it is being terminated\nI0917 07:36:27.073192       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-72c507b8-3ed5-4c5f-939d-62c60e28be28\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5165^e8bd503a-1789-11ec-bc38-9a0e93fd5cd1\") on node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:36:27.079103       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-72c507b8-3ed5-4c5f-939d-62c60e28be28\" (UniqueName: 
\"kubernetes.io/csi/csi-hostpath-provisioning-5165^e8bd503a-1789-11ec-bc38-9a0e93fd5cd1\") on node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:36:27.135226       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-8342/pvc-nlwd7\"\nI0917 07:36:27.140117       1 pv_controller.go:640] volume \"local-lp2m6\" is released and reclaim policy \"Retain\" will be executed\nI0917 07:36:27.143358       1 pv_controller.go:879] volume \"local-lp2m6\" entered phase \"Released\"\nI0917 07:36:27.234383       1 pv_controller_base.go:505] deletion of claim \"provisioning-8342/pvc-nlwd7\" was already processed\nE0917 07:36:27.490209       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:36:27.639163       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-72c507b8-3ed5-4c5f-939d-62c60e28be28\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5165^e8bd503a-1789-11ec-bc38-9a0e93fd5cd1\") on node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:36:27.716252       1 event.go:291] \"Event occurred\" object=\"volume-1389-3611/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0917 07:36:27.997334       1 event.go:291] \"Event occurred\" object=\"volume-1389/csi-hostpathgjh29\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-1389\\\" or manually created by system administrator\"\nI0917 07:36:28.543885       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-3890-6156\nI0917 07:36:28.555950       1 
namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5603-9821\nI0917 07:36:28.987957       1 pv_controller.go:879] volume \"pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e\" entered phase \"Bound\"\nI0917 07:36:28.988145       1 pv_controller.go:982] volume \"pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e\" bound to claim \"pvc-protection-6650/pvc-protectionbz8gz\"\nI0917 07:36:28.997156       1 pv_controller.go:823] claim \"pvc-protection-6650/pvc-protectionbz8gz\" entered phase \"Bound\"\nI0917 07:36:29.488322       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00f65cf2fda4849b6\") from node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:36:30.206653       1 pv_controller.go:930] claim \"pv-4694/pvc-jvgjq\" bound to volume \"nfs-4745r\"\nI0917 07:36:30.206964       1 event.go:291] \"Event occurred\" object=\"volume-1389/csi-hostpathgjh29\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-1389\\\" or manually created by system administrator\"\nI0917 07:36:30.216536       1 pv_controller.go:879] volume \"nfs-4745r\" entered phase \"Bound\"\nI0917 07:36:30.216735       1 pv_controller.go:982] volume \"nfs-4745r\" bound to claim \"pv-4694/pvc-jvgjq\"\nI0917 07:36:30.224442       1 pv_controller.go:823] claim \"pv-4694/pvc-jvgjq\" entered phase \"Bound\"\nI0917 07:36:30.225055       1 pv_controller.go:930] claim \"volume-9292/pvc-fptnt\" bound to volume \"local-cgfr8\"\nI0917 07:36:30.225536       1 event.go:291] \"Event occurred\" object=\"volume-provisioning-6/pvc-blrdj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually 
created by system administrator\"\nI0917 07:36:30.237846       1 pv_controller.go:879] volume \"local-cgfr8\" entered phase \"Bound\"\nI0917 07:36:30.237881       1 pv_controller.go:982] volume \"local-cgfr8\" bound to claim \"volume-9292/pvc-fptnt\"\nI0917 07:36:30.250087       1 pv_controller.go:823] claim \"volume-9292/pvc-fptnt\" entered phase \"Bound\"\nI0917 07:36:30.288160       1 pv_controller.go:879] volume \"pvc-65d420e7-380c-4692-a6f9-ada48c9b9335\" entered phase \"Bound\"\nI0917 07:36:30.288197       1 pv_controller.go:982] volume \"pvc-65d420e7-380c-4692-a6f9-ada48c9b9335\" bound to claim \"volume-1389/csi-hostpathgjh29\"\nI0917 07:36:30.306881       1 namespace_controller.go:185] Namespace has been deleted volume-expand-3273\nI0917 07:36:30.308998       1 pv_controller.go:823] claim \"volume-1389/csi-hostpathgjh29\" entered phase \"Bound\"\nI0917 07:36:30.785622       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-4701/nodeport-test\" need=2 creating=2\nI0917 07:36:30.837826       1 event.go:291] \"Event occurred\" object=\"services-4701/nodeport-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: nodeport-test-6wnv6\"\nI0917 07:36:30.911968       1 event.go:291] \"Event occurred\" object=\"services-4701/nodeport-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: nodeport-test-k99rm\"\nI0917 07:36:31.303326       1 namespace_controller.go:185] Namespace has been deleted ephemeral-5104-3793\nI0917 07:36:31.578693       1 namespace_controller.go:185] Namespace has been deleted provisioning-9285\nI0917 07:36:31.795145       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [mygroup.example.com/v1beta1, Resource=foo2rqr4as]\nI0917 07:36:31.795303       1 shared_informer.go:240] Waiting for caches to sync for garbage 
collector\nI0917 07:36:31.795343       1 shared_informer.go:247] Caches are synced for garbage collector \nI0917 07:36:31.795365       1 garbagecollector.go:254] synced garbage collector\nI0917 07:36:31.857930       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00f65cf2fda4849b6\") from node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:36:31.858202       1 event.go:291] \"Event occurred\" object=\"pvc-protection-6650/pvc-tester-zqxl2\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e\\\" \"\nI0917 07:36:32.208355       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-6673/awssp5gh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:36:32.210727       1 pv_controller.go:879] volume \"local-pvrkrz7\" entered phase \"Available\"\nI0917 07:36:32.304699       1 pv_controller.go:930] claim \"persistent-local-volumes-test-132/pvc-24cj6\" bound to volume \"local-pvrkrz7\"\nI0917 07:36:32.313701       1 pv_controller.go:879] volume \"local-pvrkrz7\" entered phase \"Bound\"\nI0917 07:36:32.314313       1 pv_controller.go:982] volume \"local-pvrkrz7\" bound to claim \"persistent-local-volumes-test-132/pvc-24cj6\"\nI0917 07:36:32.323457       1 pv_controller.go:823] claim \"persistent-local-volumes-test-132/pvc-24cj6\" entered phase \"Bound\"\nI0917 07:36:32.410240       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-6673/awssp5gh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system 
administrator\"\nI0917 07:36:32.417513       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-8962/pod-a655693c-67dc-4354-9332-a33c1b3d9968\" PVC=\"persistent-local-volumes-test-8962/pvc-m8r9t\"\nI0917 07:36:32.417661       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8962/pvc-m8r9t\"\nE0917 07:36:32.449584       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods\nI0917 07:36:32.451749       1 namespace_controller.go:185] Namespace has been deleted topology-1325\nI0917 07:36:32.628258       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-65d420e7-380c-4692-a6f9-ada48c9b9335\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-1389^f955a402-1789-11ec-b010-46b444ec8fcc\") from node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nE0917 07:36:32.796771       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-9775/pvc-7h85w: storageclass.storage.k8s.io \"provisioning-9775\" not found\nI0917 07:36:32.797421       1 event.go:291] \"Event occurred\" object=\"provisioning-9775/pvc-7h85w\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-9775\\\" not found\"\nI0917 07:36:32.808370       1 namespace_controller.go:185] Namespace has been deleted provisioning-9349\nI0917 07:36:32.907011       1 pv_controller.go:879] volume \"local-2jhwp\" entered phase \"Available\"\nI0917 07:36:33.004632       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-5953/test-orphan-deployment\"\nI0917 07:36:33.178955       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-65d420e7-380c-4692-a6f9-ada48c9b9335\" (UniqueName: 
\"kubernetes.io/csi/csi-hostpath-volume-1389^f955a402-1789-11ec-b010-46b444ec8fcc\") from node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:36:33.179095       1 event.go:291] \"Event occurred\" object=\"volume-1389/hostpath-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-65d420e7-380c-4692-a6f9-ada48c9b9335\\\" \"\nE0917 07:36:33.478905       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:36:33.520650       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-2969/awsmsvj2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:36:33.596703       1 namespace_controller.go:185] Namespace has been deleted emptydir-893\nI0917 07:36:33.737738       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-2969/awsmsvj2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:36:33.738234       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-2969/awsmsvj2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:36:34.355741       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-87/aws8s6v4\"\nI0917 07:36:34.363207       1 pv_controller.go:640] volume \"pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6\" is released and reclaim 
policy \"Delete\" will be executed\nI0917 07:36:34.366497       1 pv_controller.go:879] volume \"pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6\" entered phase \"Released\"\nI0917 07:36:34.368278       1 pv_controller.go:1340] isVolumeReleased[pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6]: volume is released\nI0917 07:36:34.557133       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"ebs.csi.aws.com-vol-0ee87200b524457f6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6\") from node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:36:34.753980       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-1651-227/csi-mockplugin-594f677956\" objectUID=53378894-048e-439c-8185-e3ceb1de632a kind=\"ControllerRevision\" virtual=false\nI0917 07:36:34.754311       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-1651-227/csi-mockplugin\nI0917 07:36:34.754462       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-1651-227/csi-mockplugin-0\" objectUID=a6d068f8-62c5-49ab-9a76-794c5a93717a kind=\"Pod\" virtual=false\nI0917 07:36:34.768849       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-1651-227/csi-mockplugin-594f677956\" objectUID=53378894-048e-439c-8185-e3ceb1de632a kind=\"ControllerRevision\" propagationPolicy=Background\nI0917 07:36:34.769016       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-1651-227/csi-mockplugin-0\" objectUID=a6d068f8-62c5-49ab-9a76-794c5a93717a kind=\"Pod\" propagationPolicy=Background\nI0917 07:36:34.975583       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-1651-227/csi-mockplugin-attacher\nI0917 07:36:34.975594       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-1651-227/csi-mockplugin-attacher-58f988c4c8\" objectUID=4e28fb66-0503-4bc8-a63e-214864ecfcec kind=\"ControllerRevision\" virtual=false\nI0917 07:36:34.975623       1 
garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-1651-227/csi-mockplugin-attacher-0\" objectUID=689c6788-a944-4653-a031-7c04327241ac kind=\"Pod\" virtual=false\nI0917 07:36:34.978328       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-1651-227/csi-mockplugin-attacher-58f988c4c8\" objectUID=4e28fb66-0503-4bc8-a63e-214864ecfcec kind=\"ControllerRevision\" propagationPolicy=Background\nI0917 07:36:34.978494       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-1651-227/csi-mockplugin-attacher-0\" objectUID=689c6788-a944-4653-a031-7c04327241ac kind=\"Pod\" propagationPolicy=Background\nE0917 07:36:34.981839       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0917 07:36:35.105742       1 csi_attacher.go:711] kubernetes.io/csi: attachment for vol-0ee87200b524457f6 failed: rpc error: code = Internal desc = Could not attach volume \"vol-0ee87200b524457f6\" to node \"i-08e49c3e403a3ad35\": could not attach volume \"vol-0ee87200b524457f6\" to node \"i-08e49c3e403a3ad35\": IncorrectState: vol-0ee87200b524457f6 is not 'available'.\n\tstatus code: 400, request id: 91c2e8cf-81d3-4e1d-85bb-602144e7a52b\nE0917 07:36:35.105843       1 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6 podName: nodeName:}\" failed. No retries permitted until 2021-09-17 07:36:35.605822874 +0000 UTC m=+1098.941554137 (durationBeforeRetry 500ms). 
Error: AttachVolume.Attach failed for volume \"ebs.csi.aws.com-vol-0ee87200b524457f6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6\") from node \"ip-172-20-33-78.eu-west-2.compute.internal\" : rpc error: code = Internal desc = Could not attach volume \"vol-0ee87200b524457f6\" to node \"i-08e49c3e403a3ad35\": could not attach volume \"vol-0ee87200b524457f6\" to node \"i-08e49c3e403a3ad35\": IncorrectState: vol-0ee87200b524457f6 is not 'available'.\n\tstatus code: 400, request id: 91c2e8cf-81d3-4e1d-85bb-602144e7a52b\nI0917 07:36:35.105904       1 event.go:291] \"Event occurred\" object=\"volume-2408/aws-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"AttachVolume.Attach failed for volume \\\"ebs.csi.aws.com-vol-0ee87200b524457f6\\\" : rpc error: code = Internal desc = Could not attach volume \\\"vol-0ee87200b524457f6\\\" to node \\\"i-08e49c3e403a3ad35\\\": could not attach volume \\\"vol-0ee87200b524457f6\\\" to node \\\"i-08e49c3e403a3ad35\\\": IncorrectState: vol-0ee87200b524457f6 is not 'available'.\\n\\tstatus code: 400, request id: 91c2e8cf-81d3-4e1d-85bb-602144e7a52b\"\nI0917 07:36:35.474791       1 namespace_controller.go:185] Namespace has been deleted provisioning-1302-6914\nI0917 07:36:35.666452       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"ebs.csi.aws.com-vol-0ee87200b524457f6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6\") from node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:36:35.852501       1 pv_controller.go:879] volume \"pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b\" entered phase \"Bound\"\nI0917 07:36:35.852534       1 pv_controller.go:982] volume \"pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b\" bound to claim \"fsgroupchangepolicy-6673/awssp5gh\"\nI0917 07:36:35.860433       1 pv_controller.go:823] claim \"fsgroupchangepolicy-6673/awssp5gh\" entered phase \"Bound\"\nI0917 07:36:36.123012       1 
event.go:291] \"Event occurred\" object=\"volume-expand-3559/awst8tx2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalExpanding\" message=\"CSI migration enabled for kubernetes.io/aws-ebs; waiting for external resizer to expand the pvc\"\nI0917 07:36:36.138778       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-1651\nE0917 07:36:36.220254       1 csi_attacher.go:711] kubernetes.io/csi: attachment for vol-0ee87200b524457f6 failed: rpc error: code = Internal desc = Could not attach volume \"vol-0ee87200b524457f6\" to node \"i-08e49c3e403a3ad35\": could not attach volume \"vol-0ee87200b524457f6\" to node \"i-08e49c3e403a3ad35\": IncorrectState: vol-0ee87200b524457f6 is not 'available'.\n\tstatus code: 400, request id: df95a68e-8f16-4ee1-83ca-d1ff2b323432\nI0917 07:36:36.220281       1 actual_state_of_world.go:350] Volume \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6\" is already added to attachedVolume list to node \"ip-172-20-33-78.eu-west-2.compute.internal\", update device path \"\"\nE0917 07:36:36.220394       1 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6 podName: nodeName:}\" failed. No retries permitted until 2021-09-17 07:36:37.220372272 +0000 UTC m=+1100.556103532 (durationBeforeRetry 1s). 
Error: AttachVolume.Attach failed for volume "ebs.csi.aws.com-vol-0ee87200b524457f6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6") from node "ip-172-20-33-78.eu-west-2.compute.internal" : rpc error: code = Internal desc = Could not attach volume "vol-0ee87200b524457f6" to node "i-08e49c3e403a3ad35": could not attach volume "vol-0ee87200b524457f6" to node "i-08e49c3e403a3ad35": IncorrectState: vol-0ee87200b524457f6 is not 'available'.
	status code: 400, request id: df95a68e-8f16-4ee1-83ca-d1ff2b323432
I0917 07:36:36.220671       1 event.go:291] "Event occurred" object="volume-2408/aws-injector" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="AttachVolume.Attach failed for volume \"ebs.csi.aws.com-vol-0ee87200b524457f6\" : rpc error: code = Internal desc = Could not attach volume \"vol-0ee87200b524457f6\" to node \"i-08e49c3e403a3ad35\": could not attach volume \"vol-0ee87200b524457f6\" to node \"i-08e49c3e403a3ad35\": IncorrectState: vol-0ee87200b524457f6 is not 'available'.\n\tstatus code: 400, request id: df95a68e-8f16-4ee1-83ca-d1ff2b323432"
I0917 07:36:36.472282       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-00d2941494215f317") from node "ip-172-20-60-186.eu-west-2.compute.internal" 
E0917 07:36:36.618583       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-8947/default: secrets "default-token-z4bwf" is forbidden: unable to create new content in namespace configmap-8947 because it is being terminated
I0917 07:36:37.084117       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cac65654da8c239f") on node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:37.088700       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cac65654da8c239f") on node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:37.097433       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d40b921d2b3fa40b") on node "ip-172-20-51-79.eu-west-2.compute.internal" 
I0917 07:36:37.105310       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d40b921d2b3fa40b") on node "ip-172-20-51-79.eu-west-2.compute.internal" 
I0917 07:36:37.113580       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-6a20ab54-abbd-4bcd-a7d7-583cd5d55901" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-5165^f32eade9-1789-11ec-bc38-9a0e93fd5cd1") on node "ip-172-20-51-79.eu-west-2.compute.internal" 
I0917 07:36:37.116453       1 pv_controller.go:879] volume "pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965" entered phase "Bound"
I0917 07:36:37.116623       1 pv_controller.go:982] volume "pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965" bound to claim "fsgroupchangepolicy-2969/awsmsvj2"
I0917 07:36:37.121565       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-6a20ab54-abbd-4bcd-a7d7-583cd5d55901" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-5165^f32eade9-1789-11ec-bc38-9a0e93fd5cd1") on node "ip-172-20-51-79.eu-west-2.compute.internal" 
I0917 07:36:37.126876       1 pv_controller.go:823] claim "fsgroupchangepolicy-2969/awsmsvj2" entered phase "Bound"
I0917 07:36:37.316161       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "ebs.csi.aws.com-vol-0ee87200b524457f6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6") from node "ip-172-20-33-78.eu-west-2.compute.internal" 
E0917 07:36:37.551077       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-7129/default: secrets "default-token-49jft" is forbidden: unable to create new content in namespace kubectl-7129 because it is being terminated
I0917 07:36:37.680134       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-5165/pvc-9j9nf"
I0917 07:36:37.685437       1 pv_controller.go:640] volume "pvc-6a20ab54-abbd-4bcd-a7d7-583cd5d55901" is released and reclaim policy "Delete" will be executed
I0917 07:36:37.688562       1 pv_controller.go:879] volume "pvc-6a20ab54-abbd-4bcd-a7d7-583cd5d55901" entered phase "Released"
I0917 07:36:37.690185       1 pv_controller.go:1340] isVolumeReleased[pvc-6a20ab54-abbd-4bcd-a7d7-583cd5d55901]: volume is released
I0917 07:36:37.694337       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-6a20ab54-abbd-4bcd-a7d7-583cd5d55901" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-5165^f32eade9-1789-11ec-bc38-9a0e93fd5cd1") on node "ip-172-20-51-79.eu-west-2.compute.internal" 
I0917 07:36:37.758106       1 pv_controller_base.go:505] deletion of claim "provisioning-5165/pvc-9j9nf" was already processed
I0917 07:36:37.818844       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05297dadf6ba71de2") from node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:37.861673       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "ebs.csi.aws.com-vol-0ee87200b524457f6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6") from node "ip-172-20-33-78.eu-west-2.compute.internal" 
I0917 07:36:37.861710       1 actual_state_of_world.go:350] Volume "kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6" is already added to attachedVolume list to node "ip-172-20-33-78.eu-west-2.compute.internal", update device path ""
I0917 07:36:37.862007       1 event.go:291] "Event occurred" object="volume-2408/aws-injector" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"ebs.csi.aws.com-vol-0ee87200b524457f6\" "
I0917 07:36:37.974053       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-5165/pvc-nlftn"
E0917 07:36:37.979416       1 pvc_protection_controller.go:215] "Error removing protection finalizer from PVC" err="Operation cannot be fulfilled on persistentvolumeclaims \"pvc-nlftn\": the object has been modified; please apply your changes to the latest version and try again" PVC="provisioning-5165/pvc-nlftn"
E0917 07:36:37.979451       1 pvc_protection_controller.go:149] PVC provisioning-5165/pvc-nlftn failed with : Operation cannot be fulfilled on persistentvolumeclaims "pvc-nlftn": the object has been modified; please apply your changes to the latest version and try again
I0917 07:36:37.981655       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-5165/pvc-nlftn"
I0917 07:36:37.987741       1 pv_controller.go:640] volume "pvc-72c507b8-3ed5-4c5f-939d-62c60e28be28" is released and reclaim policy "Delete" will be executed
I0917 07:36:37.990027       1 pv_controller.go:879] volume "pvc-72c507b8-3ed5-4c5f-939d-62c60e28be28" entered phase "Released"
I0917 07:36:37.992766       1 pv_controller.go:1340] isVolumeReleased[pvc-72c507b8-3ed5-4c5f-939d-62c60e28be28]: volume is released
I0917 07:36:38.035541       1 pv_controller_base.go:505] deletion of claim "provisioning-5165/pvc-nlftn" was already processed
I0917 07:36:38.126688       1 namespace_controller.go:185] Namespace has been deleted provisioning-2759
E0917 07:36:38.203990       1 tokens_controller.go:262] error synchronizing serviceaccount security-context-9067/default: secrets "default-token-97xjw" is forbidden: unable to create new content in namespace security-context-9067 because it is being terminated
I0917 07:36:38.250674       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-00d2941494215f317") from node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:38.250924       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-6673/pod-7cefc9eb-7289-42a7-9e35-b51c066c7ac3" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b\" "
I0917 07:36:38.434051       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-8962/pod-a655693c-67dc-4354-9332-a33c1b3d9968" PVC="persistent-local-volumes-test-8962/pvc-m8r9t"
I0917 07:36:38.434530       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-8962/pvc-m8r9t"
I0917 07:36:38.472451       1 namespace_controller.go:185] Namespace has been deleted provisioning-8342
E0917 07:36:38.583136       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-8962/default: secrets "default-token-gq7ph" is forbidden: unable to create new content in namespace persistent-local-volumes-test-8962 because it is being terminated
I0917 07:36:38.633064       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-8962/pod-a655693c-67dc-4354-9332-a33c1b3d9968" PVC="persistent-local-volumes-test-8962/pvc-m8r9t"
I0917 07:36:38.633091       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-8962/pvc-m8r9t"
I0917 07:36:38.637388       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-8962/pod-2084b3f9-5aba-4bb8-ab62-006f9780b826" PVC="persistent-local-volumes-test-8962/pvc-m8r9t"
I0917 07:36:38.637412       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-8962/pvc-m8r9t"
I0917 07:36:39.208447       1 namespace_controller.go:185] Namespace has been deleted sysctl-6116
I0917 07:36:39.231993       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-8962/pod-2084b3f9-5aba-4bb8-ab62-006f9780b826" PVC="persistent-local-volumes-test-8962/pvc-m8r9t"
I0917 07:36:39.232218       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-8962/pvc-m8r9t"
I0917 07:36:39.431860       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-8962/pod-2084b3f9-5aba-4bb8-ab62-006f9780b826" PVC="persistent-local-volumes-test-8962/pvc-m8r9t"
I0917 07:36:39.431988       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-8962/pvc-m8r9t"
I0917 07:36:39.437193       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-8962/pvc-m8r9t"
I0917 07:36:39.444704       1 pv_controller.go:640] volume "local-pvgpp78" is released and reclaim policy "Retain" will be executed
I0917 07:36:39.448937       1 pv_controller.go:879] volume "local-pvgpp78" entered phase "Released"
I0917 07:36:39.455128       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-8962/pvc-m8r9t" was already processed
I0917 07:36:39.544732       1 namespace_controller.go:185] Namespace has been deleted pods-246
I0917 07:36:39.880233       1 namespace_controller.go:185] Namespace has been deleted nettest-7827
E0917 07:36:40.388311       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:36:41.372371       1 namespace_controller.go:185] Namespace has been deleted emptydir-6615
I0917 07:36:41.818022       1 pvc_protection_controller.go:291] "PVC is unused" PVC="pv-4694/pvc-jvgjq"
I0917 07:36:41.824555       1 pv_controller.go:640] volume "nfs-4745r" is released and reclaim policy "Retain" will be executed
I0917 07:36:41.827397       1 pv_controller.go:879] volume "nfs-4745r" entered phase "Released"
I0917 07:36:42.214157       1 pv_controller_base.go:505] deletion of claim "pv-4694/pvc-jvgjq" was already processed
I0917 07:36:42.581592       1 namespace_controller.go:185] Namespace has been deleted node-lease-test-7346
I0917 07:36:42.647268       1 namespace_controller.go:185] Namespace has been deleted kubectl-7129
I0917 07:36:42.947110       1 namespace_controller.go:185] Namespace has been deleted projected-5055
I0917 07:36:43.283239       1 namespace_controller.go:185] Namespace has been deleted security-context-9067
I0917 07:36:43.420307       1 namespace_controller.go:185] Namespace has been deleted dns-autoscaling-8069
I0917 07:36:43.466787       1 namespace_controller.go:185] Namespace has been deleted node-lease-test-4095
E0917 07:36:43.844032       1 pv_controller.go:1451] error finding provisioning plugin for claim volumemode-2931/pvc-9kd87: storageclass.storage.k8s.io "volumemode-2931" not found
I0917 07:36:43.844145       1 event.go:291] "Event occurred" object="volumemode-2931/pvc-9kd87" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volumemode-2931\" not found"
I0917 07:36:43.944236       1 pv_controller.go:879] volume "local-pp4rc" entered phase "Available"
I0917 07:36:43.984095       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cac65654da8c239f") on node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:43.992584       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cac65654da8c239f") from node "ip-172-20-33-78.eu-west-2.compute.internal" 
E0917 07:36:44.349262       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:36:44.488068       1 namespace_controller.go:185] Namespace has been deleted topology-5220
I0917 07:36:45.206820       1 pv_controller.go:930] claim "provisioning-9775/pvc-7h85w" bound to volume "local-2jhwp"
I0917 07:36:45.209888       1 pv_controller.go:1340] isVolumeReleased[pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6]: volume is released
I0917 07:36:45.216312       1 pv_controller.go:879] volume "local-2jhwp" entered phase "Bound"
I0917 07:36:45.216486       1 pv_controller.go:982] volume "local-2jhwp" bound to claim "provisioning-9775/pvc-7h85w"
I0917 07:36:45.220777       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-1651-227
I0917 07:36:45.221977       1 pv_controller.go:823] claim "provisioning-9775/pvc-7h85w" entered phase "Bound"
I0917 07:36:45.222361       1 pv_controller.go:930] claim "volumemode-2931/pvc-9kd87" bound to volume "local-pp4rc"
I0917 07:36:45.222647       1 event.go:291] "Event occurred" object="volume-provisioning-6/pvc-blrdj" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0917 07:36:45.229944       1 pv_controller.go:879] volume "local-pp4rc" entered phase "Bound"
I0917 07:36:45.230111       1 pv_controller.go:982] volume "local-pp4rc" bound to claim "volumemode-2931/pvc-9kd87"
I0917 07:36:45.236843       1 pv_controller.go:823] claim "volumemode-2931/pvc-9kd87" entered phase "Bound"
I0917 07:36:45.683458       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05297dadf6ba71de2") from node "ip-172-20-60-186.eu-west-2.compute.internal" 
I0917 07:36:45.683598       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-2969/pod-7c598755-5d92-436b-b32e-2340ead4918c" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965\" "
E0917 07:36:46.046467       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods
E0917 07:36:46.146642       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods
E0917 07:36:46.266289       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods
I0917 07:36:46.356941       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cac65654da8c239f") from node "ip-172-20-33-78.eu-west-2.compute.internal" 
I0917 07:36:46.357357       1 event.go:291] "Event occurred" object="volume-expand-3559/pod-3fa9646c-f61c-4002-bbc9-59c0c0ffe87e" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6\" "
E0917 07:36:46.387172       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods
E0917 07:36:46.545267       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods
E0917 07:36:46.720614       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods
I0917 07:36:46.849294       1 namespace_controller.go:185] Namespace has been deleted configmap-8947
E0917 07:36:46.971994       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods
E0917 07:36:47.408730       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods
I0917 07:36:47.851517       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-9292/pvc-fptnt"
I0917 07:36:47.852968       1 namespace_controller.go:185] Namespace has been deleted volume-7115
I0917 07:36:47.857328       1 pv_controller.go:640] volume "local-cgfr8" is released and reclaim policy "Retain" will be executed
I0917 07:36:47.859570       1 pv_controller.go:879] volume "local-cgfr8" entered phase "Released"
I0917 07:36:47.953695       1 pv_controller_base.go:505] deletion of claim "volume-9292/pvc-fptnt" was already processed
I0917 07:36:48.020274       1 garbagecollector.go:471] "Processing object" object="provisioning-5165-5566/csi-hostpathplugin-0" objectUID=e9ccad6f-d122-4ac2-ad1e-5f8a22619522 kind="Pod" virtual=false
I0917 07:36:48.020359       1 stateful_set.go:440] StatefulSet has been deleted provisioning-5165-5566/csi-hostpathplugin
I0917 07:36:48.020577       1 garbagecollector.go:471] "Processing object" object="provisioning-5165-5566/csi-hostpathplugin-6c79fdfb88" objectUID=7507f07a-47c9-4785-81ca-d529cdf7fc86 kind="ControllerRevision" virtual=false
I0917 07:36:48.043958       1 garbagecollector.go:580] "Deleting object" object="provisioning-5165-5566/csi-hostpathplugin-0" objectUID=e9ccad6f-d122-4ac2-ad1e-5f8a22619522 kind="Pod" propagationPolicy=Background
I0917 07:36:48.044547       1 garbagecollector.go:580] "Deleting object" object="provisioning-5165-5566/csi-hostpathplugin-6c79fdfb88" objectUID=7507f07a-47c9-4785-81ca-d529cdf7fc86 kind="ControllerRevision" propagationPolicy=Background
E0917 07:36:48.175817       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods
I0917 07:36:48.350386       1 namespace_controller.go:185] Namespace has been deleted provisioning-5165
E0917 07:36:48.616315       1 namespace_controller.go:162] deletion of namespace containers-189 failed: unexpected items still remain in namespace: containers-189 for gvr: /v1, Resource=pods
E0917 07:36:48.707588       1 namespace_controller.go:162] deletion of namespace containers-189 failed: unexpected items still remain in namespace: containers-189 for gvr: /v1, Resource=pods
I0917 07:36:48.790522       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-8962
E0917 07:36:48.823334       1 namespace_controller.go:162] deletion of namespace containers-189 failed: unexpected items still remain in namespace: containers-189 for gvr: /v1, Resource=pods
E0917 07:36:48.945237       1 namespace_controller.go:162] deletion of namespace containers-189 failed: unexpected items still remain in namespace: containers-189 for gvr: /v1, Resource=pods
E0917 07:36:49.078949       1 namespace_controller.go:162] deletion of namespace containers-189 failed: unexpected items still remain in namespace: containers-189 for gvr: /v1, Resource=pods
E0917 07:36:49.259108       1 namespace_controller.go:162] deletion of namespace containers-189 failed: unexpected items still remain in namespace: containers-189 for gvr: /v1, Resource=pods
E0917 07:36:49.583689       1 namespace_controller.go:162] deletion of namespace containers-189 failed: unexpected items still remain in namespace: containers-189 for gvr: /v1, Resource=pods
E0917 07:36:49.618325       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods
I0917 07:36:49.748605       1 pv_controller.go:1340] isVolumeReleased[pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6]: volume is released
I0917 07:36:49.880056       1 pv_controller_base.go:505] deletion of claim "provisioning-87/aws8s6v4" was already processed
E0917 07:36:50.022157       1 namespace_controller.go:162] deletion of namespace containers-189 failed: unexpected items still remain in namespace: containers-189 for gvr: /v1, Resource=pods
I0917 07:36:50.202673       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-2d38dc86-89e2-409f-a7a7-7b19d65e86b6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d40b921d2b3fa40b") on node "ip-172-20-51-79.eu-west-2.compute.internal" 
E0917 07:36:50.834993       1 namespace_controller.go:162] deletion of namespace containers-189 failed: unexpected items still remain in namespace: containers-189 for gvr: /v1, Resource=pods
I0917 07:36:51.068311       1 namespace_controller.go:185] Namespace has been deleted var-expansion-454
I0917 07:36:51.333394       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-132/pod-36772025-c4ff-4487-a8be-16f560a61eec" PVC="persistent-local-volumes-test-132/pvc-24cj6"
I0917 07:36:51.338783       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-132/pvc-24cj6"
E0917 07:36:51.468003       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-4879/default: secrets "default-token-42rfd" is forbidden: unable to create new content in namespace disruption-4879 because it is being terminated
I0917 07:36:51.696843       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-9775/pvc-7h85w"
I0917 07:36:51.701805       1 pv_controller.go:640] volume "local-2jhwp" is released and reclaim policy "Retain" will be executed
I0917 07:36:51.704938       1 pv_controller.go:879] volume "local-2jhwp" entered phase "Released"
I0917 07:36:51.798523       1 pv_controller_base.go:505] deletion of claim "provisioning-9775/pvc-7h85w" was already processed
I0917 07:36:52.047264       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="pvc-protection-6650/pvc-tester-zqxl2" PVC="pvc-protection-6650/pvc-protectionbz8gz"
I0917 07:36:52.047291       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="pvc-protection-6650/pvc-protectionbz8gz"
E0917 07:36:52.284889       1 namespace_controller.go:162] deletion of namespace containers-189 failed: unexpected items still remain in namespace: containers-189 for gvr: /v1, Resource=pods
E0917 07:36:52.339811       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods
E0917 07:36:53.064942       1 namespace_controller.go:162] deletion of namespace apply-7520 failed: unexpected items still remain in namespace: apply-7520 for gvr: /v1, Resource=pods
I0917 07:36:53.433992       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-132/pod-502a4e8c-45df-417f-86f7-4626f74f8bb5" PVC="persistent-local-volumes-test-132/pvc-24cj6"
I0917 07:36:53.434086       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-132/pvc-24cj6"
I0917 07:36:53.636172       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-132/pod-502a4e8c-45df-417f-86f7-4626f74f8bb5" PVC="persistent-local-volumes-test-132/pvc-24cj6"
I0917 07:36:53.636386       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-132/pvc-24cj6"
I0917 07:36:53.639232       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-132/pod-502a4e8c-45df-417f-86f7-4626f74f8bb5" PVC="persistent-local-volumes-test-132/pvc-24cj6"
I0917 07:36:53.639317       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-132/pvc-24cj6"
I0917 07:36:54.238635       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-132/pod-502a4e8c-45df-417f-86f7-4626f74f8bb5" PVC="persistent-local-volumes-test-132/pvc-24cj6"
I0917 07:36:54.239782       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-132/pvc-24cj6"
E0917 07:36:54.412238       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-629/default: secrets "default-token-v7d7n" is forbidden: unable to create new content in namespace kubectl-629 because it is being terminated
I0917 07:36:54.436027       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-132/pod-502a4e8c-45df-417f-86f7-4626f74f8bb5" PVC="persistent-local-volumes-test-132/pvc-24cj6"
I0917 07:36:54.436317       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-132/pvc-24cj6"
I0917 07:36:54.447063       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-132/pvc-24cj6"
I0917 07:36:54.470648       1 pv_controller.go:640] volume "local-pvrkrz7" is released and reclaim policy "Retain" will be executed
I0917 07:36:54.481559       1 pv_controller.go:879] volume "local-pvrkrz7" entered phase "Released"
I0917 07:36:54.488724       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-132/pvc-24cj6" was already processed
I0917 07:36:54.527944       1 controller_ref_manager.go:232] patching pod replicaset-4359_pod-adoption-release to remove its controllerRef to apps/v1/ReplicaSet:pod-adoption-release
I0917 07:36:54.538778       1 garbagecollector.go:471] "Processing object" object="replicaset-4359/pod-adoption-release" objectUID=a7654aec-c7ac-4562-b1e0-df5c3cedcc73 kind="ReplicaSet" virtual=false
I0917 07:36:54.542391       1 replica_set.go:563] "Too few replicas" replicaSet="replicaset-4359/pod-adoption-release" need=1 creating=1
I0917 07:36:54.547043       1 garbagecollector.go:510] object [apps/v1/ReplicaSet, namespace: replicaset-4359, name: pod-adoption-release, uid: a7654aec-c7ac-4562-b1e0-df5c3cedcc73]'s doesn't have an owner, continue on next item
I0917 07:36:54.562854       1 event.go:291] "Event occurred" object="replicaset-4359/pod-adoption-release" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: pod-adoption-release-p7xgx"
I0917 07:36:55.354901       1 namespace_controller.go:185] Namespace has been deleted apply-8919
I0917 07:36:55.633189       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="pvc-protection-6650/pvc-tester-zqxl2" PVC="pvc-protection-6650/pvc-protectionbz8gz"
I0917 07:36:55.633215       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="pvc-protection-6650/pvc-protectionbz8gz"
I0917 07:36:55.635588       1 event.go:291] "Event occurred" object="provisioning-659-3916/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0917 07:36:55.833083       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="pvc-protection-6650/pvc-tester-zqxl2" PVC="pvc-protection-6650/pvc-protectionbz8gz"
I0917 07:36:55.833128       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="pvc-protection-6650/pvc-protectionbz8gz"
I0917 07:36:55.839866       1 pvc_protection_controller.go:291] "PVC is unused" PVC="pvc-protection-6650/pvc-protectionbz8gz"
I0917 07:36:55.862913       1 pv_controller.go:640] volume "pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e" is released and reclaim policy "Delete" will be executed
I0917 07:36:55.873668       1 pv_controller.go:879] volume "pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e" entered phase "Released"
I0917 07:36:55.876837       1 pv_controller.go:1340] isVolumeReleased[pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e]: volume is released
I0917 07:36:55.922258       1 event.go:291] "Event occurred" object="provisioning-659/csi-hostpath4q2tg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-659\" or manually created by system administrator"
I0917 07:36:55.922425       1 event.go:291] "Event occurred" object="provisioning-659/csi-hostpath4q2tg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-659\" or manually created by system administrator"
E0917 07:36:57.619180       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods
E0917 07:36:57.947983       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9775/default: secrets "default-token-gw46d" is forbidden: unable to create new content in namespace provisioning-9775 because it is being terminated
I0917 07:36:58.451338       1 namespace_controller.go:185] Namespace has been deleted provisioning-5165-5566
I0917 07:36:58.546903       1 replica_set.go:563] "Too few replicas" replicaSet="services-7607/externalsvc" need=2 creating=2
I0917 07:36:58.554031       1 event.go:291] "Event occurred" object="services-7607/externalsvc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: externalsvc-qmfd5"
I0917 07:36:58.560193       1 event.go:291] "Event occurred" object="services-7607/externalsvc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: externalsvc-bg9mx"
I0917 07:36:58.652891       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-505/test-rolling-update-with-lb-5ff6986c95" need=1 creating=1
I0917 07:36:58.653232       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-5ff6986c95 to 1"
I0917 07:36:58.671519       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5ff6986c95-58wkn"
I0917 07:36:58.672757       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-505/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0917 07:36:58.688415       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-505/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0917 07:36:58.788102       1 pv_controller.go:879] volume "pvc-2d9cc0a0-22f6-4008-bd83-7d4f84b848dc" entered phase "Bound"
I0917 07:36:58.788194       1 pv_controller.go:982] volume "pvc-2d9cc0a0-22f6-4008-bd83-7d4f84b848dc" bound to claim "provisioning-659/csi-hostpath4q2tg"
I0917 07:36:58.801168       1 pv_controller.go:823] claim "provisioning-659/csi-hostpath4q2tg" entered phase "Bound"
I0917 07:36:58.933664       1 garbagecollector.go:471] "Processing object" object="services-4701/nodeport-test-6wnv6" objectUID=05559107-dca0-4236-92fc-dcb9bc195dad kind="Pod" virtual=false
I0917 07:36:58.934034       1 garbagecollector.go:471] "Processing object" object="services-4701/nodeport-test-k99rm" objectUID=b0e98f96-cb6d-4dd8-8fcc-1ec65852b2c1 kind="Pod" virtual=false
I0917 07:36:58.936775       1 garbagecollector.go:580] "Deleting object" object="services-4701/nodeport-test-k99rm" objectUID=b0e98f96-cb6d-4dd8-8fcc-1ec65852b2c1 kind="Pod" propagationPolicy=Background
I0917 07:36:58.937744       1 garbagecollector.go:580] "Deleting object" object="services-4701/nodeport-test-6wnv6" objectUID=05559107-dca0-4236-92fc-dcb9bc195dad kind="Pod" propagationPolicy=Background
E0917 07:36:59.022463       1 tokens_controller.go:262] error synchronizing serviceaccount services-4701/default: secrets "default-token-68h7k" is forbidden: unable to create new content in namespace services-4701 because it is being terminated
E0917 07:36:59.054171       1 namespace_controller.go:162] deletion of namespace services-4701 failed: unexpected items still remain in namespace: services-4701 for gvr: /v1, Resource=pods
E0917 07:36:59.366662       1 namespace_controller.go:162] deletion of namespace services-4701 failed: unexpected items still remain in namespace: services-4701 for gvr: /v1, Resource=pods
I0917 07:36:59.400727       1 namespace_controller.go:185] Namespace has been deleted pv-4694
I0917 07:36:59.525704       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-505/test-rolling-update-with-lb-864fb64577" need=2 deleting=1
I0917 07:36:59.526067       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-505/test-rolling-update-with-lb-864fb64577" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95]
I0917 07:36:59.526219       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-864fb64577" pod="deployment-505/test-rolling-update-with-lb-864fb64577-hlqbq"
I0917 07:36:59.526918       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-864fb64577 to 2"
I0917 07:36:59.537947       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-505/test-rolling-update-with-lb-5ff6986c95" need=2 creating=1
I0917 07:36:59.541486       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-5ff6986c95 to 2"
W0917 07:36:59.549046       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "deployment-505/test-rolling-update-with-lb", retrying. Error: EndpointSlice informer cache is out of date
I0917 07:36:59.550181       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb-864fb64577" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-864fb64577-hlqbq"
I0917 07:36:59.550205       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5ff6986c95-9dpbj"
I0917 07:36:59.579265       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-505/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
E0917 07:36:59.584110       1 namespace_controller.go:162] deletion of namespace services-4701 failed: unexpected items still remain in namespace: services-4701 for gvr: /v1, Resource=pods
I0917 07:36:59.634369       1 namespace_controller.go:185] Namespace has been deleted kubectl-629
I0917 07:36:59.882996       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-00f65cf2fda4849b6") on node "ip-172-20-33-78.eu-west-2.compute.internal" 
I0917 07:36:59.888365       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-00f65cf2fda4849b6") on node "ip-172-20-33-78.eu-west-2.compute.internal" 
I0917 07:36:59.903967       1 replica_set.go:563] "Too few replicas" replicaSet="replicaset-4359/pod-adoption-release" need=1 creating=1
I0917 07:36:59.964256       1 namespace_controller.go:185] Namespace has been deleted containers-189
I0917 07:37:00.116848       1 job_controller.go:406] enqueueing job cronjob-3465/replace-27197737
I0917 07:37:00.118367       1 event.go:291] "Event occurred" object="cronjob-3465/replace" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job replace-27197737"
I0917 07:37:00.143845       1 cronjob_controllerv2.go:193] "Error cleaning up jobs" cronjob="cronjob-3465/replace" resourceVersion="38194" err="Operation cannot be fulfilled on cronjobs.batch \"replace\": the object has been modified; please apply your changes to the latest version and try again"
E0917 07:37:00.143867       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-3465/replace, requeuing: Operation cannot be fulfilled on cronjobs.batch "replace": the object has been modified; please apply your changes to the latest version and try again
I0917 07:37:00.144341       1 job_controller.go:406] enqueueing job cronjob-3465/replace-27197737
I0917 07:37:00.144497       1 event.go:291] "Event occurred" object="cronjob-3465/replace-27197737" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: replace-27197737--1-25f6v"
I0917 07:37:00.175422       1 job_controller.go:406] enqueueing job cronjob-3465/replace-27197737
I0917 07:37:00.175463       1 job_controller.go:406] enqueueing job cronjob-3465/replace-27197737
I0917 07:37:00.209594       1 event.go:291] "Event occurred" object="volume-provisioning-6/pvc-blrdj" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0917 07:37:00.228495       1 pv_controller.go:1340] 
isVolumeReleased[pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e]: volume is released\nE0917 07:37:00.252826       1 namespace_controller.go:162] deletion of namespace services-4701 failed: unexpected items still remain in namespace: services-4701 for gvr: /v1, Resource=pods\nI0917 07:37:00.318539       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"ebs.csi.aws.com-vol-0ee87200b524457f6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6\") on node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:37:00.325754       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"ebs.csi.aws.com-vol-0ee87200b524457f6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6\") on node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:37:00.476966       1 namespace_controller.go:185] Namespace has been deleted volume-9292\nI0917 07:37:00.727038       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-2d9cc0a0-22f6-4008-bd83-7d4f84b848dc\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-659^0a4e555e-178a-11ec-b13b-0a3c66f3c28b\") from node \"ip-172-20-53-192.eu-west-2.compute.internal\" \nW0917 07:37:00.830850       1 reconciler.go:335] Multi-Attach error for volume \"pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05297dadf6ba71de2\") from node \"ip-172-20-33-78.eu-west-2.compute.internal\" Volume is already exclusively attached to node ip-172-20-60-186.eu-west-2.compute.internal and can't be attached to another\nI0917 07:37:00.831026       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-2969/pod-0dff4a4c-a0f4-4ba0-bc89-0a27e0b3893f\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965\\\" Volume is already exclusively attached to one node and can't be attached to 
another\"\nE0917 07:37:00.908178       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-87/default: secrets \"default-token-g2c8j\" is forbidden: unable to create new content in namespace provisioning-87 because it is being terminated\nE0917 07:37:00.996391       1 namespace_controller.go:162] deletion of namespace services-4701 failed: unexpected items still remain in namespace: services-4701 for gvr: /v1, Resource=pods\nI0917 07:37:00.998766       1 namespace_controller.go:185] Namespace has been deleted kubectl-1209\nE0917 07:37:01.156908       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:37:01.214575       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-902-crds crd-publish-openapi-test-multi-ver.example.com/v3, Resource=e2e-test-crd-publish-openapi-6303-crds], removed: []\nI0917 07:37:01.214798       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-902-crds.crd-publish-openapi-test-common-group.example.com\nI0917 07:37:01.214889       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-6303-crds.crd-publish-openapi-test-multi-ver.example.com\nI0917 07:37:01.214969       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nE0917 07:37:01.260389       1 namespace_controller.go:162] deletion of namespace services-4701 failed: unexpected items still remain in namespace: services-4701 for gvr: /v1, Resource=pods\nI0917 07:37:01.316531       1 shared_informer.go:247] Caches are synced for resource quota \nI0917 07:37:01.316686       1 resource_quota_controller.go:454] 
synced quota controller\nI0917 07:37:01.341959       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-2d9cc0a0-22f6-4008-bd83-7d4f84b848dc\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-659^0a4e555e-178a-11ec-b13b-0a3c66f3c28b\") from node \"ip-172-20-53-192.eu-west-2.compute.internal\" \nI0917 07:37:01.342050       1 event.go:291] \"Event occurred\" object=\"provisioning-659/pod-subpath-test-dynamicpv-497s\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-2d9cc0a0-22f6-4008-bd83-7d4f84b848dc\\\" \"\nE0917 07:37:01.486965       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-1202/default: secrets \"default-token-qphz9\" is forbidden: unable to create new content in namespace configmap-1202 because it is being terminated\nE0917 07:37:01.584350       1 namespace_controller.go:162] deletion of namespace services-4701 failed: unexpected items still remain in namespace: services-4701 for gvr: /v1, Resource=pods\nI0917 07:37:01.684235       1 namespace_controller.go:185] Namespace has been deleted disruption-4879\nI0917 07:37:01.744056       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volumemode-2931/pvc-9kd87\"\nI0917 07:37:01.762656       1 pv_controller.go:640] volume \"local-pp4rc\" is released and reclaim policy \"Retain\" will be executed\nI0917 07:37:01.767847       1 pv_controller.go:879] volume \"local-pp4rc\" entered phase \"Released\"\nI0917 07:37:01.774208       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-5320/pvc-7677p\"\nI0917 07:37:01.782424       1 pv_controller.go:640] volume \"pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250\" is released and reclaim policy \"Delete\" will be executed\nI0917 07:37:01.786555       1 pv_controller.go:879] volume \"pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250\" entered phase \"Released\"\nI0917 07:37:01.796635       1 
pv_controller.go:1340] isVolumeReleased[pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250]: volume is released\nI0917 07:37:01.806429       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-902-crds crd-publish-openapi-test-multi-ver.example.com/v3, Resource=e2e-test-crd-publish-openapi-6303-crds], removed: []\nI0917 07:37:01.819049       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0917 07:37:01.819259       1 shared_informer.go:247] Caches are synced for garbage collector \nI0917 07:37:01.819303       1 garbagecollector.go:254] synced garbage collector\nI0917 07:37:01.858526       1 pv_controller_base.go:505] deletion of claim \"volumemode-2931/pvc-9kd87\" was already processed\nI0917 07:37:02.000487       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-8044/sample-webhook-deployment\"\nE0917 07:37:02.083350       1 namespace_controller.go:162] deletion of namespace services-4701 failed: unexpected items still remain in namespace: services-4701 for gvr: /v1, Resource=pods\nI0917 07:37:02.598704       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-132\nI0917 07:37:02.773682       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-505/test-rolling-update-with-lb-864fb64577\" need=1 deleting=1\nI0917 07:37:02.773715       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-505/test-rolling-update-with-lb-864fb64577\" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95]\nI0917 07:37:02.773778       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-864fb64577\" pod=\"deployment-505/test-rolling-update-with-lb-864fb64577-fffqk\"\nI0917 07:37:02.773955       1 event.go:291] \"Event occurred\" 
object=\"deployment-505/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-864fb64577 to 1\"\nI0917 07:37:02.799808       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb-864fb64577\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-864fb64577-fffqk\"\nW0917 07:37:02.805638       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"deployment-505/test-rolling-update-with-lb\", retrying. Error: EndpointSlice informer cache is out of date\nI0917 07:37:02.806136       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-505/test-rolling-update-with-lb-5ff6986c95\" need=3 creating=1\nI0917 07:37:02.806478       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-5ff6986c95 to 3\"\nI0917 07:37:02.821284       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-505/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0917 07:37:02.824325       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb-5ff6986c95\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-5ff6986c95-zksgj\"\nE0917 07:37:02.885090       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could 
not find the requested resource\nI0917 07:37:02.955867       1 namespace_controller.go:185] Namespace has been deleted provisioning-9775\nE0917 07:37:03.036323       1 namespace_controller.go:162] deletion of namespace services-4701 failed: unexpected items still remain in namespace: services-4701 for gvr: /v1, Resource=pods\nE0917 07:37:03.781354       1 resource_quota_controller.go:253] Operation cannot be fulfilled on resourcequotas \"quota-not-terminating\": the object has been modified; please apply your changes to the latest version and try again\nI0917 07:37:04.547230       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-505/test-rolling-update-with-lb-864fb64577\" need=0 deleting=1\nI0917 07:37:04.547395       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-505/test-rolling-update-with-lb-864fb64577\" relatedReplicaSets=[test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-864fb64577]\nI0917 07:37:04.547544       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-864fb64577\" pod=\"deployment-505/test-rolling-update-with-lb-864fb64577-djdbc\"\nI0917 07:37:04.551512       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-864fb64577 to 0\"\nI0917 07:37:04.628697       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb-864fb64577\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-864fb64577-djdbc\"\nI0917 07:37:04.656158       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-505/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been 
modified; please apply your changes to the latest version and try again\"\nI0917 07:37:04.814548       1 event.go:291] \"Event occurred\" object=\"resourcequota-9228/test-claim\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nE0917 07:37:05.145560       1 tokens_controller.go:262] error synchronizing serviceaccount e2e-kubelet-etc-hosts-8015/default: secrets \"default-token-pcz9p\" is forbidden: unable to create new content in namespace e2e-kubelet-etc-hosts-8015 because it is being terminated\nI0917 07:37:05.360524       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-505/test-rolling-update-with-lb-59c4fc87b4\" need=1 creating=1\nI0917 07:37:05.360861       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-59c4fc87b4 to 1\"\nI0917 07:37:05.366929       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb-59c4fc87b4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-59c4fc87b4-dn5jc\"\nI0917 07:37:05.378821       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-505/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0917 07:37:05.393196       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-505/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest 
version and try again\"\nW0917 07:37:05.499002       1 reconciler.go:335] Multi-Attach error for volume \"pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00d2941494215f317\") from node \"ip-172-20-33-78.eu-west-2.compute.internal\" Volume is already exclusively attached to node ip-172-20-60-186.eu-west-2.compute.internal and can't be attached to another\nI0917 07:37:05.499168       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-6673/pod-654b0bdc-9c52-46ea-85fa-902c527262a5\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nI0917 07:37:05.570614       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-1389/csi-hostpathgjh29\"\nI0917 07:37:05.587168       1 pv_controller.go:640] volume \"pvc-65d420e7-380c-4692-a6f9-ada48c9b9335\" is released and reclaim policy \"Delete\" will be executed\nI0917 07:37:05.591506       1 pv_controller.go:879] volume \"pvc-65d420e7-380c-4692-a6f9-ada48c9b9335\" entered phase \"Released\"\nI0917 07:37:05.593249       1 pv_controller.go:1340] isVolumeReleased[pvc-65d420e7-380c-4692-a6f9-ada48c9b9335]: volume is released\nI0917 07:37:05.634971       1 pv_controller_base.go:505] deletion of claim \"volume-1389/csi-hostpathgjh29\" was already processed\nE0917 07:37:05.830064       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:37:06.003811       1 namespace_controller.go:185] Namespace has been deleted provisioning-87\nI0917 07:37:06.180954       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-8914\nI0917 07:37:06.587106       1 event.go:291] \"Event occurred\" 
object=\"deployment-505/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-5ff6986c95 to 2\"\nI0917 07:37:06.587415       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-505/test-rolling-update-with-lb-5ff6986c95\" need=2 deleting=1\nI0917 07:37:06.587547       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-505/test-rolling-update-with-lb-5ff6986c95\" relatedReplicaSets=[test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4 test-rolling-update-with-lb-864fb64577]\nI0917 07:37:06.587734       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-5ff6986c95\" pod=\"deployment-505/test-rolling-update-with-lb-5ff6986c95-58wkn\"\nI0917 07:37:06.592206       1 namespace_controller.go:185] Namespace has been deleted configmap-1202\nI0917 07:37:06.606292       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-505/test-rolling-update-with-lb-59c4fc87b4\" need=2 creating=1\nI0917 07:37:06.613371       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-59c4fc87b4 to 2\"\nI0917 07:37:06.619461       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb-59c4fc87b4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-59c4fc87b4-2kvd4\"\nI0917 07:37:06.620245       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb-5ff6986c95\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-5ff6986c95-58wkn\"\nI0917 
07:37:06.626939       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-505/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0917 07:37:06.916058       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:37:06.996093       1 namespace_controller.go:185] Namespace has been deleted pvc-protection-6650\nI0917 07:37:07.010092       1 event.go:291] \"Event occurred\" object=\"resourcequota-9228/test-claim\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0917 07:37:07.021766       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"resourcequota-9228/test-claim\"\nI0917 07:37:07.047493       1 job_controller.go:406] enqueueing job cronjob-3465/replace-27197737\nI0917 07:37:07.151889       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05297dadf6ba71de2\") on node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:37:07.171103       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05297dadf6ba71de2\") on node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:37:07.174188       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-65d420e7-380c-4692-a6f9-ada48c9b9335\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-1389^f955a402-1789-11ec-b010-46b444ec8fcc\") on node 
\"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:37:07.181175       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-65d420e7-380c-4692-a6f9-ada48c9b9335\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-1389^f955a402-1789-11ec-b010-46b444ec8fcc\") on node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:37:07.186961       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00d2941494215f317\") on node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:37:07.194097       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00d2941494215f317\") on node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:37:07.280868       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"ebs.csi.aws.com-vol-0ee87200b524457f6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6\") on node \"ip-172-20-33-78.eu-west-2.compute.internal\" \nI0917 07:37:07.288225       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"ebs.csi.aws.com-vol-0ee87200b524457f6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:37:07.728023       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-65d420e7-380c-4692-a6f9-ada48c9b9335\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-1389^f955a402-1789-11ec-b010-46b444ec8fcc\") on node \"ip-172-20-51-79.eu-west-2.compute.internal\" \nI0917 07:37:07.944002       1 utils.go:366] couldn't find ipfamilies for headless service: services-7607/clusterip-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nE0917 07:37:07.986296       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods\nI0917 07:37:08.058797       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-644\nI0917 07:37:08.546060       1 namespace_controller.go:185] Namespace has been deleted container-runtime-3966\nI0917 07:37:08.564765       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-5ff6986c95 to 1\"\nI0917 07:37:08.564914       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-505/test-rolling-update-with-lb-5ff6986c95\" need=1 deleting=1\nI0917 07:37:08.564943       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-505/test-rolling-update-with-lb-5ff6986c95\" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4]\nI0917 07:37:08.565020       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-5ff6986c95\" pod=\"deployment-505/test-rolling-update-with-lb-5ff6986c95-9dpbj\"\nI0917 07:37:08.583189       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb-5ff6986c95\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-5ff6986c95-9dpbj\"\nI0917 07:37:08.585470       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-505/test-rolling-update-with-lb-59c4fc87b4\" need=3 creating=1\nI0917 07:37:08.586039       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb\" 
kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-59c4fc87b4 to 3\"\nI0917 07:37:08.594385       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-505/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0917 07:37:08.599131       1 event.go:291] \"Event occurred\" object=\"deployment-505/test-rolling-update-with-lb-59c4fc87b4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-59c4fc87b4-lvc7d\"\nI0917 07:37:08.651020       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-8406/test-quota\nI0917 07:37:08.962819       1 utils.go:366] couldn't find ipfamilies for headless service: services-7607/clusterip-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly
E0917 07:37:08.979318       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 07:37:09.422401       1 tokens_controller.go:262] error synchronizing serviceaccount volumemode-2931/default: secrets "default-token-5hxfk" is forbidden: unable to create new content in namespace volumemode-2931 because it is being terminated
I0917 07:37:09.525525       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-505/test-rolling-update-with-lb-5ff6986c95" need=0 deleting=1
I0917 07:37:09.525749       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-505/test-rolling-update-with-lb-5ff6986c95" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4]
I0917 07:37:09.525995       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-5ff6986c95" pod="deployment-505/test-rolling-update-with-lb-5ff6986c95-zksgj"
I0917 07:37:09.528933       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-5ff6986c95 to 0"
I0917 07:37:09.549477       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-5ff6986c95-zksgj"
W0917 07:37:09.572828       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "deployment-505/test-rolling-update-with-lb", retrying. Error: EndpointSlice informer cache is out of date
I0917 07:37:09.574377       1 endpoints_controller.go:374] "Error syncing endpoints, retrying" service="deployment-505/test-rolling-update-with-lb" err="Operation cannot be fulfilled on endpoints \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0917 07:37:09.574685       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint deployment-505/test-rolling-update-with-lb: Operation cannot be fulfilled on endpoints \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
E0917 07:37:09.618006       1 tokens_controller.go:262] error synchronizing serviceaccount dns-2431/default: secrets "default-token-h5gss" is forbidden: unable to create new content in namespace dns-2431 because it is being terminated
I0917 07:37:09.660310       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "ebs.csi.aws.com-vol-0ee87200b524457f6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6") from node "ip-172-20-60-186.eu-west-2.compute.internal"
I0917 07:37:09.660684       1 event.go:291] "Event occurred" object="volume-2408/aws-client" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"ebs.csi.aws.com-vol-0ee87200b524457f6\" "
I0917 07:37:09.845449       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-expand-3559/awst8tx2"
I0917 07:37:09.857553       1 pv_controller.go:640] volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" is released and reclaim policy "Delete" will be executed
I0917 07:37:09.863081       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-659/csi-hostpath4q2tg"
I0917 07:37:09.863272       1 pv_controller.go:879] volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" entered phase "Released"
I0917 07:37:09.865427       1 pv_controller.go:1340] isVolumeReleased[pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6]: volume is released
I0917 07:37:09.869094       1 pv_controller.go:640] volume "pvc-2d9cc0a0-22f6-4008-bd83-7d4f84b848dc" is released and reclaim policy "Delete" will be executed
I0917 07:37:09.873660       1 pv_controller.go:879] volume "pvc-2d9cc0a0-22f6-4008-bd83-7d4f84b848dc" entered phase "Released"
I0917 07:37:09.876701       1 pv_controller.go:1340] isVolumeReleased[pvc-2d9cc0a0-22f6-4008-bd83-7d4f84b848dc]: volume is released
I0917 07:37:09.884542       1 pv_controller_base.go:505] deletion of claim "provisioning-659/csi-hostpath4q2tg" was already processed
I0917 07:37:09.928897       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cac65654da8c239f") on node "ip-172-20-33-78.eu-west-2.compute.internal"
I0917 07:37:09.931049       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cac65654da8c239f") on node "ip-172-20-33-78.eu-west-2.compute.internal"
I0917 07:37:09.941539       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0bc052de5e9824396") on node "ip-172-20-33-78.eu-west-2.compute.internal"
I0917 07:37:09.945677       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0bc052de5e9824396") on node "ip-172-20-33-78.eu-west-2.compute.internal"
I0917 07:37:10.059283       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-686dff95d9 to 1"
I0917 07:37:10.059603       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-505/test-rolling-update-with-lb-686dff95d9" need=1 creating=1
I0917 07:37:10.065187       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb-686dff95d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-686dff95d9-cn7jn"
I0917 07:37:10.076260       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-505/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0917 07:37:10.118390       1 namespace_controller.go:185] Namespace has been deleted services-4701
E0917 07:37:10.272185       1 resource_quota_controller.go:253] Operation cannot be fulfilled on resourcequotas "quota-terminating": the object has been modified; please apply your changes to the latest version and try again
I0917 07:37:10.302539       1 event.go:291] "Event occurred" object="pv-6906/pvc-lcqks" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
I0917 07:37:10.408569       1 pv_controller.go:879] volume "nfs-sk9kn" entered phase "Available"
I0917 07:37:10.714981       1 namespace_controller.go:185] Namespace has been deleted replicaset-4359
E0917 07:37:10.913552       1 tokens_controller.go:262] error synchronizing serviceaccount volume-1389/default: secrets "default-token-nw6zt" is forbidden: unable to create new content in namespace volume-1389 because it is being terminated
I0917 07:37:11.000807       1 deployment_controller.go:583] "Deployment has been deleted" deployment="webhook-7634/sample-webhook-deployment"
E0917 07:37:13.001996       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 07:37:13.379792       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:37:13.404674       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-2d9cc0a0-22f6-4008-bd83-7d4f84b848dc" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-659^0a4e555e-178a-11ec-b13b-0a3c66f3c28b") on node "ip-172-20-53-192.eu-west-2.compute.internal"
I0917 07:37:13.408138       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-2d9cc0a0-22f6-4008-bd83-7d4f84b848dc" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-659^0a4e555e-178a-11ec-b13b-0a3c66f3c28b") on node "ip-172-20-53-192.eu-west-2.compute.internal"
I0917 07:37:13.757852       1 namespace_controller.go:185] Namespace has been deleted resourcequota-8406
I0917 07:37:13.978008       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-2d9cc0a0-22f6-4008-bd83-7d4f84b848dc" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-659^0a4e555e-178a-11ec-b13b-0a3c66f3c28b") on node "ip-172-20-53-192.eu-west-2.compute.internal"
I0917 07:37:14.029510       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-00f65cf2fda4849b6") on node "ip-172-20-33-78.eu-west-2.compute.internal"
I0917 07:37:14.074843       1 event.go:291] "Event occurred" object="provisioning-2668/awsvfqll" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E0917 07:37:14.111483       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-8997/default: secrets "default-token-sd2n6" is forbidden: unable to create new content in namespace kubectl-8997 because it is being terminated
I0917 07:37:14.278575       1 event.go:291] "Event occurred" object="provisioning-2668/awsvfqll" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
E0917 07:37:14.368362       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:37:14.416328       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-9228/test-quota
I0917 07:37:14.611900       1 namespace_controller.go:185] Namespace has been deleted volumemode-2931
I0917 07:37:14.747789       1 namespace_controller.go:185] Namespace has been deleted dns-2431
E0917 07:37:14.848671       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:37:15.000223       1 deployment_controller.go:583] "Deployment has been deleted" deployment="apply-5588/deployment"
E0917 07:37:15.026045       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-4384/default: secrets "default-token-wdknx" is forbidden: unable to create new content in namespace downward-api-4384 because it is being terminated
E0917 07:37:15.201630       1 tokens_controller.go:262] error synchronizing serviceaccount apply-9325/default: secrets "default-token-hwfnw" is forbidden: unable to create new content in namespace apply-9325 because it is being terminated
I0917 07:37:15.209051       1 pv_controller.go:930] claim "pv-6906/pvc-lcqks" bound to volume "nfs-sk9kn"
I0917 07:37:15.209706       1 event.go:291] "Event occurred" object="provisioning-2668/awsvfqll" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0917 07:37:15.209729       1 event.go:291] "Event occurred" object="volume-provisioning-6/pvc-blrdj" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0917 07:37:15.221091       1 pv_controller.go:1340] isVolumeReleased[pvc-fab315b8-8370-4405-835a-d9e7bc1bed3e]: volume is released
I0917 07:37:15.221810       1 pv_controller.go:1340] isVolumeReleased[pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6]: volume is released
I0917 07:37:15.222508       1 pv_controller.go:1340] isVolumeReleased[pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250]: volume is released
I0917 07:37:15.227363       1 pv_controller.go:879] volume "nfs-sk9kn" entered phase "Bound"
I0917 07:37:15.227417       1 pv_controller.go:982] volume "nfs-sk9kn" bound to claim "pv-6906/pvc-lcqks"
I0917 07:37:15.242475       1 pv_controller.go:823] claim "pv-6906/pvc-lcqks" entered phase "Bound"
E0917 07:37:15.332819       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-659/default: secrets "default-token-cvqtt" is forbidden: unable to create new content in namespace provisioning-659 because it is being terminated
I0917 07:37:15.351851       1 namespace_controller.go:185] Namespace has been deleted e2e-kubelet-etc-hosts-8015
I0917 07:37:15.819114       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-505/test-rolling-update-with-lb-59c4fc87b4" need=2 deleting=1
I0917 07:37:15.819343       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-505/test-rolling-update-with-lb-59c4fc87b4" relatedReplicaSets=[test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4 test-rolling-update-with-lb-686dff95d9 test-rolling-update-with-lb-864fb64577]
I0917 07:37:15.819597       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-59c4fc87b4" pod="deployment-505/test-rolling-update-with-lb-59c4fc87b4-2kvd4"
I0917 07:37:15.842821       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-59c4fc87b4 to 2"
I0917 07:37:15.911640       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-686dff95d9 to 2"
I0917 07:37:15.912766       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-505/test-rolling-update-with-lb-686dff95d9" need=2 creating=1
I0917 07:37:15.923487       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb-59c4fc87b4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-59c4fc87b4-2kvd4"
I0917 07:37:15.964275       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb-686dff95d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-686dff95d9-f5gls"
I0917 07:37:16.047997       1 namespace_controller.go:185] Namespace has been deleted volume-1389
I0917 07:37:16.055202       1 stateful_set.go:440] StatefulSet has been deleted volume-1389-3611/csi-hostpathplugin
I0917 07:37:16.055360       1 garbagecollector.go:471] "Processing object" object="volume-1389-3611/csi-hostpathplugin-0" objectUID=7ed764b4-f423-41ca-beb7-4839fe8550cd kind="Pod" virtual=false
I0917 07:37:16.055430       1 garbagecollector.go:471] "Processing object" object="volume-1389-3611/csi-hostpathplugin-6587b75ffc" objectUID=c7e095ff-674d-44c7-ac7e-0582320c4b4a kind="ControllerRevision" virtual=false
I0917 07:37:16.091349       1 garbagecollector.go:580] "Deleting object" object="volume-1389-3611/csi-hostpathplugin-6587b75ffc" objectUID=c7e095ff-674d-44c7-ac7e-0582320c4b4a kind="ControllerRevision" propagationPolicy=Background
I0917 07:37:16.109077       1 garbagecollector.go:580] "Deleting object" object="volume-1389-3611/csi-hostpathplugin-0" objectUID=7ed764b4-f423-41ca-beb7-4839fe8550cd kind="Pod" propagationPolicy=Background
E0917 07:37:16.221115       1 tokens_controller.go:262] error synchronizing serviceaccount topology-2340/default: secrets "default-token-7skhl" is forbidden: unable to create new content in namespace topology-2340 because it is being terminated
I0917 07:37:16.343191       1 garbagecollector.go:471] "Processing object" object="services-7607/externalsvc-qmfd5" objectUID=25772d6b-e054-4290-a96e-7906a72e25c6 kind="Pod" virtual=false
I0917 07:37:16.343418       1 garbagecollector.go:471] "Processing object" object="services-7607/externalsvc-bg9mx" objectUID=7663af77-a85c-4676-9d5c-b600c739bb68 kind="Pod" virtual=false
I0917 07:37:16.346317       1 garbagecollector.go:580] "Deleting object" object="services-7607/externalsvc-qmfd5" objectUID=25772d6b-e054-4290-a96e-7906a72e25c6 kind="Pod" propagationPolicy=Background
I0917 07:37:16.346622       1 garbagecollector.go:580] "Deleting object" object="services-7607/externalsvc-bg9mx" objectUID=7663af77-a85c-4676-9d5c-b600c739bb68 kind="Pod" propagationPolicy=Background
W0917 07:37:16.375068       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "services-7607/externalsvc", retrying. Error: EndpointSlice informer cache is out of date
E0917 07:37:16.659042       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 07:37:16.710376       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:37:16.751275       1 pv_controller.go:1340] isVolumeReleased[pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6]: volume is released
I0917 07:37:16.832607       1 event.go:291] "Event occurred" object="statefulset-6579/datadir-ss-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0917 07:37:16.833059       1 event.go:291] "Event occurred" object="statefulset-6579/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success"
I0917 07:37:16.841200       1 event.go:291] "Event occurred" object="statefulset-6579/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-0 in StatefulSet ss successful"
I0917 07:37:16.860225       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-7dc69dbf-17ac-46a9-9596-65be27c177e6" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cac65654da8c239f") on node "ip-172-20-33-78.eu-west-2.compute.internal"
I0917 07:37:16.864916       1 event.go:291] "Event occurred" object="statefulset-6579/datadir-ss-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0917 07:37:16.899842       1 pv_controller_base.go:505] deletion of claim "volume-expand-3559/awst8tx2" was already processed
E0917 07:37:17.406236       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-9619/default: secrets "default-token-wdt4x" is forbidden: unable to create new content in namespace downward-api-9619 because it is being terminated
E0917 07:37:17.415027       1 pv_controller.go:1451] error finding provisioning plugin for claim volumemode-2309/pvc-qppsm: storageclass.storage.k8s.io "volumemode-2309" not found
I0917 07:37:17.415110       1 event.go:291] "Event occurred" object="volumemode-2309/pvc-qppsm" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volumemode-2309\" not found"
I0917 07:37:17.517782       1 pv_controller.go:879] volume "local-gkp2g" entered phase "Available"
I0917 07:37:17.634222       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-505/test-rolling-update-with-lb-59c4fc87b4" need=1 deleting=1
I0917 07:37:17.634491       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-505/test-rolling-update-with-lb-59c4fc87b4" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4 test-rolling-update-with-lb-686dff95d9]
I0917 07:37:17.634768       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-59c4fc87b4" pod="deployment-505/test-rolling-update-with-lb-59c4fc87b4-dn5jc"
I0917 07:37:17.639139       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-59c4fc87b4 to 1"
I0917 07:37:17.660851       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-505/test-rolling-update-with-lb-686dff95d9" need=3 creating=1
I0917 07:37:17.662615       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-686dff95d9 to 3"
I0917 07:37:17.666086       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb-59c4fc87b4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-59c4fc87b4-dn5jc"
W0917 07:37:17.673484       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "deployment-505/test-rolling-update-with-lb", retrying. Error: EndpointSlice informer cache is out of date
I0917 07:37:17.676700       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb-686dff95d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-686dff95d9-qlmln"
I0917 07:37:17.720733       1 namespace_controller.go:185] Namespace has been deleted proxy-8877
I0917 07:37:17.741709       1 pv_controller.go:879] volume "pvc-3cd9bbd5-a928-49b2-95a2-a3db006bb534" entered phase "Bound"
I0917 07:37:17.741902       1 pv_controller.go:982] volume "pvc-3cd9bbd5-a928-49b2-95a2-a3db006bb534" bound to claim "provisioning-2668/awsvfqll"
I0917 07:37:17.758762       1 pv_controller.go:823] claim "provisioning-2668/awsvfqll" entered phase "Bound"
I0917 07:37:17.962799       1 namespace_controller.go:185] Namespace has been deleted clientset-2522
I0917 07:37:18.183200       1 namespace_controller.go:185] Namespace has been deleted kubectl-2181
I0917 07:37:18.194894       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-1089/quota-not-terminating
I0917 07:37:18.226987       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-1089/quota-terminating
I0917 07:37:18.377246       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-3cd9bbd5-a928-49b2-95a2-a3db006bb534" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0a2e3adb90a26b955") from node "ip-172-20-60-186.eu-west-2.compute.internal"
E0917 07:37:18.426666       1 tokens_controller.go:262] error synchronizing serviceaccount request-timeout-460/default: secrets "default-token-mnhjd" is forbidden: unable to create new content in namespace request-timeout-460 because it is being terminated
I0917 07:37:18.554417       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-59c4fc87b4 to 0"
I0917 07:37:18.555712       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-505/test-rolling-update-with-lb-59c4fc87b4" need=0 deleting=1
I0917 07:37:18.555947       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-505/test-rolling-update-with-lb-59c4fc87b4" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4 test-rolling-update-with-lb-686dff95d9]
I0917 07:37:18.556146       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-59c4fc87b4" pod="deployment-505/test-rolling-update-with-lb-59c4fc87b4-lvc7d"
I0917 07:37:18.583550       1 event.go:291] "Event occurred" object="deployment-505/test-rolling-update-with-lb-59c4fc87b4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-59c4fc87b4-lvc7d"
W0917 07:37:18.595011       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "deployment-505/test-rolling-update-with-lb", retrying. Error: EndpointSlice informer cache is out of date
I0917 07:37:19.142002       1 garbagecollector.go:471] "Processing object" object="services-7607/externalsvc-78njb" objectUID=9a25850b-3c99-40c7-95f0-9dd5a06ac229 kind="EndpointSlice" virtual=false
I0917 07:37:19.158986       1 garbagecollector.go:580] "Deleting object" object="services-7607/externalsvc-78njb" objectUID=9a25850b-3c99-40c7-95f0-9dd5a06ac229 kind="EndpointSlice" propagationPolicy=Background
I0917 07:37:19.255180       1 garbagecollector.go:471] "Processing object" object="services-7607/clusterip-service-drvns" objectUID=96de4ca2-8dff-4c3a-9a90-7e46987fcf03 kind="EndpointSlice" virtual=false
I0917 07:37:19.255466       1 garbagecollector.go:471] "Processing object" object="services-7607/clusterip-service-grxcg" objectUID=8f2b3e11-e9e7-49ff-89c3-c4638d377211 kind="EndpointSlice" virtual=false
I0917 07:37:19.258106       1 garbagecollector.go:580] "Deleting object" object="services-7607/clusterip-service-grxcg" objectUID=8f2b3e11-e9e7-49ff-89c3-c4638d377211 kind="EndpointSlice" propagationPolicy=Background
I0917 07:37:19.258241       1 garbagecollector.go:580] "Deleting object" object="services-7607/clusterip-service-drvns" objectUID=96de4ca2-8dff-4c3a-9a90-7e46987fcf03 kind="EndpointSlice" propagationPolicy=Background
I0917 07:37:19.373215       1 namespace_controller.go:185] Namespace has been deleted kubectl-8997
I0917 07:37:19.492534       1 namespace_controller.go:185] Namespace has been deleted resourcequota-9228
I0917 07:37:19.694838       1 pvc_protection_controller.go:291] "PVC is unused" PVC="pv-6906/pvc-lcqks"
I0917 07:37:19.700022       1 pv_controller.go:640] volume "nfs-sk9kn" is released and reclaim policy "Retain" will be executed
I0917 07:37:19.703192       1 pv_controller.go:879] volume "nfs-sk9kn" entered phase "Released"
I0917 07:37:20.098489       1 pv_controller_base.go:505] deletion of claim "pv-6906/pvc-lcqks" was already processed
I0917 07:37:20.174857       1 namespace_controller.go:185] Namespace has been deleted downward-api-4384
I0917 07:37:20.267049       1 pv_controller.go:879] volume "hostpath-9b5h5" entered phase "Available"
I0917 07:37:20.303017       1 garbagecollector.go:471] "Processing object" object="provisioning-659-3916/csi-hostpathplugin-0" objectUID=dc18501a-905e-426f-8ada-40de5a23707a kind="Pod" virtual=false
I0917 07:37:20.303727       1 stateful_set.go:440] StatefulSet has been deleted provisioning-659-3916/csi-hostpathplugin
I0917 07:37:20.303784       1 garbagecollector.go:471] "Processing object" object="provisioning-659-3916/csi-hostpathplugin-564b578645" objectUID=d3feeebb-9c8d-4aed-b4ad-0cbc16561b65 kind="ControllerRevision" virtual=false
I0917 07:37:20.309659       1 garbagecollector.go:580] "Deleting object" object="provisioning-659-3916/csi-hostpathplugin-0" objectUID=dc18501a-905e-426f-8ada-40de5a23707a kind="Pod" propagationPolicy=Background
I0917 07:37:20.312501       1 garbagecollector.go:580] "Deleting object" object="provisioning-659-3916/csi-hostpathplugin-564b578645" objectUID=d3feeebb-9c8d-4aed-b4ad-0cbc16561b65 kind="ControllerRevision" propagationPolicy=Background
I0917 07:37:20.390411       1 event.go:291] "Event occurred" object="webhook-1433/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I0917 07:37:20.390819       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-1433/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I0917 07:37:20.418212       1 namespace_controller.go:185] Namespace has been deleted provisioning-659
I0917 07:37:20.418508       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-1433/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0917 07:37:20.482601       1 pv_controller.go:879] volume "pvc-fc37d2b5-ef4f-4c8b-b660-e17cafca250d" entered phase "Bound"
I0917 07:37:20.482637       1 pv_controller.go:982] volume "pvc-fc37d2b5-ef4f-4c8b-b660-e17cafca250d" bound to claim "statefulset-6579/datadir-ss-0"
I0917 07:37:20.487550       1 event.go:291] "Event occurred" object="webhook-1433/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-xmg5r"
E0917 07:37:20.534269       1 tokens_controller.go:262] error synchronizing serviceaccount projected-5977/default: secrets "default-token-jllsb" is forbidden: unable to create new content in namespace projected-5977 because it is being terminated
I0917 07:37:20.552955       1 pv_controller.go:823] claim "statefulset-6579/datadir-ss-0" entered phase "Bound"
I0917 07:37:20.612398       1 pv_controller.go:930] claim "pv-protection-7035/pvc-dbdsg" bound to volume "hostpath-9b5h5"
I0917 07:37:20.670379       1 pv_controller.go:879] volume "hostpath-9b5h5" entered phase "Bound"
I0917 07:37:20.671531       1 pv_controller.go:982] volume "hostpath-9b5h5" bound to claim "pv-protection-7035/pvc-dbdsg"
I0917 07:37:20.693231       1 pv_controller.go:823] claim "pv-protection-7035/pvc-dbdsg" entered phase "Bound"
E0917 07:37:20.770132       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:37:20.935639       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-fc37d2b5-ef4f-4c8b-b660-e17cafca250d" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0b512de46e73a421a") from node "ip-172-20-33-78.eu-west-2.compute.internal"
E0917 07:37:20.996868       1 tokens_controller.go:262] error synchronizing serviceaccount init-container-1815/default: secrets "default-token-55tfj" is forbidden: unable to create new content in namespace init-container-1815 because it is being terminated
E0917 07:37:21.014125       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9603/default: secrets "default-token-vmpq9" is forbidden: unable to create new content in namespace provisioning-9603 because it is being terminated
I0917 07:37:21.038243       1 pvc_protection_controller.go:291] "PVC is unused" PVC="pv-protection-7035/pvc-dbdsg"
I0917 07:37:21.049320       1 pv_controller.go:640] volume "hostpath-9b5h5" is released and reclaim policy "Retain" will be executed
I0917 07:37:21.058405       1 pv_controller.go:879] volume "hostpath-9b5h5" entered phase "Released"
I0917 07:37:21.082568       1 pv_controller_base.go:505] deletion of claim "pv-protection-7035/pvc-dbdsg" was already processed
E0917 07:37:21.256574       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-9528/default: secrets "default-token-9md55" is forbidden: unable to create new content in namespace emptydir-9528 because it is being terminated
I0917 07:37:21.514155       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05297dadf6ba71de2") on node "ip-172-20-60-186.eu-west-2.compute.internal"
I0917 07:37:21.526773       1 namespace_controller.go:185] Namespace has been deleted topology-2340
I0917 07:37:21.541015       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05297dadf6ba71de2") from node "ip-172-20-33-78.eu-west-2.compute.internal"
E0917 07:37:21.619019       1 tokens_controller.go:262] error synchronizing serviceaccount volume-1389-3611/default: secrets "default-token-vnv9s" is forbidden: unable to create new content in namespace volume-1389-3611 because it is being terminated
E0917 07:37:21.865903       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:37:22.000385       1 deployment_controller.go:583] "Deployment has been deleted" deployment="kubectl-3276/httpd-deployment"
I0917 07:37:22.336816       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-5740/test-rollover-controller" need=1 creating=1
I0917 07:37:22.341498       1 event.go:291] "Event occurred" object="deployment-5740/test-rollover-controller" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rollover-controller-wqt8r"
I0917 07:37:22.773660       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-00d2941494215f317") on node "ip-172-20-60-186.eu-west-2.compute.internal"
I0917 07:37:22.803033       1 pv_controller.go:1340] isVolumeReleased[pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250]: volume is released
I0917 07:37:22.852218       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-00d2941494215f317") from node "ip-172-20-33-78.eu-west-2.compute.internal"
I0917 07:37:22.873746       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-3cd9bbd5-a928-49b2-95a2-a3db006bb534" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0a2e3adb90a26b955") from node "ip-172-20-60-186.eu-west-2.compute.internal"
I0917 07:37:22.873913       1 event.go:291] "Event occurred" object="provisioning-2668/pod-subpath-test-dynamicpv-744r" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-3cd9bbd5-a928-49b2-95a2-a3db006bb534\" "
I0917 07:37:22.974441       1 pv_controller_base.go:505] deletion of claim "provisioning-5320/pvc-7677p" was already processed
I0917 07:37:23.048904       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-29156ee5-71ec-4a35-a8d1-ef8a1f64a250" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0bc052de5e9824396") on node "ip-172-20-33-78.eu-west-2.compute.internal"
I0917 07:37:23.298194       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05297dadf6ba71de2") from node "ip-172-20-33-78.eu-west-2.compute.internal"
I0917 07:37:23.298269       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-2969/pod-0dff4a4c-a0f4-4ba0-bc89-0a27e0b3893f" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-9c3d4ba6-67d1-40ba-9d5a-382e94532965\" "
I0917 07:37:23.329289       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-fc37d2b5-ef4f-4c8b-b660-e17cafca250d" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0b512de46e73a421a") from node "ip-172-20-33-78.eu-west-2.compute.internal"
I0917 07:37:23.329512       1 event.go:291] "Event occurred" object="statefulset-6579/ss-0" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-fc37d2b5-ef4f-4c8b-b660-e17cafca250d\" "
I0917 07:37:23.408733       1 namespace_controller.go:185] Namespace has been deleted resourcequota-1089
E0917 07:37:23.671853       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9280/default: secrets "default-token-h254q" is forbidden: unable to create new content in namespace provisioning-9280 because it is being terminated
I0917 07:37:23.694828       1 namespace_controller.go:185] Namespace has been deleted request-timeout-460
E0917 07:37:24.010018       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0917 07:37:24.635531       1 tokens_controller.go:262] error synchronizing serviceaccount services-7607/default: secrets "default-token-n28kd" is forbidden: unable to create new content in namespace services-7607 because it is being terminated
I0917 07:37:24.682753       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "ebs.csi.aws.com-vol-0e9ad1bc53e5ce2c3" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3") from node "ip-172-20-60-186.eu-west-2.compute.internal"
E0917 07:37:25.179031       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0917 07:37:25.242864       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-00d2941494215f317") from node "ip-172-20-33-78.eu-west-2.compute.internal"
I0917 07:37:25.243270       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-6673/pod-654b0bdc-9c52-46ea-85fa-902c527262a5" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-edb1ea7e-bdaa-42b2-847d-4b18bb2a7b4b\" "
E0917 07:37:25.565319       1 tokens_controller.go:262]
error synchronizing serviceaccount volume-expand-3559/default: secrets \"default-token-kxk2w\" is forbidden: unable to create new content in namespace volume-expand-3559 because it is being terminated\nI0917 07:37:25.810917       1 namespace_controller.go:185] Namespace has been deleted projected-5977\nE0917 07:37:25.827087       1 csi_attacher.go:711] kubernetes.io/csi: attachment for vol-0e9ad1bc53e5ce2c3 failed: rpc error: code = Internal desc = Could not attach volume \"vol-0e9ad1bc53e5ce2c3\" to node \"i-0aa984ac7bb70ba77\": could not attach volume \"vol-0e9ad1bc53e5ce2c3\" to node \"i-0aa984ac7bb70ba77\": IncorrectState: vol-0e9ad1bc53e5ce2c3 is not 'available'.\n\tstatus code: 400, request id: 279be38b-1318-45b8-a8fb-ad75fe6a7085\nE0917 07:37:25.827300       1 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3 podName: nodeName:}\" failed. No retries permitted until 2021-09-17 07:37:26.327276491 +0000 UTC m=+1149.663007757 (durationBeforeRetry 500ms). 
Error: AttachVolume.Attach failed for volume \"ebs.csi.aws.com-vol-0e9ad1bc53e5ce2c3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" : rpc error: code = Internal desc = Could not attach volume \"vol-0e9ad1bc53e5ce2c3\" to node \"i-0aa984ac7bb70ba77\": could not attach volume \"vol-0e9ad1bc53e5ce2c3\" to node \"i-0aa984ac7bb70ba77\": IncorrectState: vol-0e9ad1bc53e5ce2c3 is not 'available'.\n\tstatus code: 400, request id: 279be38b-1318-45b8-a8fb-ad75fe6a7085\nI0917 07:37:25.827561       1 event.go:291] \"Event occurred\" object=\"volume-6716/exec-volume-test-inlinevolume-spr6\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"AttachVolume.Attach failed for volume \\\"ebs.csi.aws.com-vol-0e9ad1bc53e5ce2c3\\\" : rpc error: code = Internal desc = Could not attach volume \\\"vol-0e9ad1bc53e5ce2c3\\\" to node \\\"i-0aa984ac7bb70ba77\\\": could not attach volume \\\"vol-0e9ad1bc53e5ce2c3\\\" to node \\\"i-0aa984ac7bb70ba77\\\": IncorrectState: vol-0e9ad1bc53e5ce2c3 is not 'available'.\\n\\tstatus code: 400, request id: 279be38b-1318-45b8-a8fb-ad75fe6a7085\"\nI0917 07:37:25.939904       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nI0917 07:37:25.946732       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nI0917 07:37:25.949004       1 event.go:291] \"Event occurred\" object=\"job-9342/suspend-false-to-true\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: suspend-false-to-true--1-tn247\"\nI0917 07:37:25.955480       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nI0917 07:37:25.957171       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nI0917 07:37:25.963164       1 event.go:291] \"Event occurred\" object=\"job-9342/suspend-false-to-true\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"Created pod: suspend-false-to-true--1-fmk7j\"\nI0917 07:37:25.970886       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nI0917 07:37:25.982277       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nI0917 07:37:25.994079       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nI0917 07:37:26.139412       1 namespace_controller.go:185] Namespace has been deleted init-container-1815\nI0917 07:37:26.169212       1 namespace_controller.go:185] Namespace has been deleted provisioning-9603\nI0917 07:37:26.330251       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"ebs.csi.aws.com-vol-0e9ad1bc53e5ce2c3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:37:26.345851       1 namespace_controller.go:185] Namespace has been deleted emptydir-9528\nI0917 07:37:26.351776       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nI0917 07:37:26.432174       1 namespace_controller.go:185] Namespace has been deleted ephemeral-5884\nI0917 07:37:26.634423       1 namespace_controller.go:185] Namespace has been deleted volume-1389-3611\nI0917 07:37:26.840259       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-5740/test-rollover-deployment-78bc8b888c\" need=1 creating=1\nI0917 07:37:26.841125       1 event.go:291] \"Event occurred\" object=\"deployment-5740/test-rollover-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rollover-deployment-78bc8b888c to 1\"\nI0917 07:37:26.845697       1 event.go:291] \"Event occurred\" object=\"deployment-5740/test-rollover-deployment-78bc8b888c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: 
test-rollover-deployment-78bc8b888c-6ttzh\"\nI0917 07:37:26.856844       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5740/test-rollover-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rollover-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0917 07:37:26.874992       1 csi_attacher.go:711] kubernetes.io/csi: attachment for vol-0e9ad1bc53e5ce2c3 failed: rpc error: code = Internal desc = Could not attach volume \"vol-0e9ad1bc53e5ce2c3\" to node \"i-0aa984ac7bb70ba77\": could not attach volume \"vol-0e9ad1bc53e5ce2c3\" to node \"i-0aa984ac7bb70ba77\": IncorrectState: vol-0e9ad1bc53e5ce2c3 is not 'available'.\n\tstatus code: 400, request id: 5ec28d3b-cfe1-42ae-87ba-743bd711d6ec\nI0917 07:37:26.875017       1 actual_state_of_world.go:350] Volume \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3\" is already added to attachedVolume list to node \"ip-172-20-60-186.eu-west-2.compute.internal\", update device path \"\"\nE0917 07:37:26.875224       1 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3 podName: nodeName:}\" failed. No retries permitted until 2021-09-17 07:37:27.875176407 +0000 UTC m=+1151.210907673 (durationBeforeRetry 1s). 
Error: AttachVolume.Attach failed for volume \"ebs.csi.aws.com-vol-0e9ad1bc53e5ce2c3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" : rpc error: code = Internal desc = Could not attach volume \"vol-0e9ad1bc53e5ce2c3\" to node \"i-0aa984ac7bb70ba77\": could not attach volume \"vol-0e9ad1bc53e5ce2c3\" to node \"i-0aa984ac7bb70ba77\": IncorrectState: vol-0e9ad1bc53e5ce2c3 is not 'available'.\n\tstatus code: 400, request id: 5ec28d3b-cfe1-42ae-87ba-743bd711d6ec\nI0917 07:37:26.875363       1 event.go:291] \"Event occurred\" object=\"volume-6716/exec-volume-test-inlinevolume-spr6\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"AttachVolume.Attach failed for volume \\\"ebs.csi.aws.com-vol-0e9ad1bc53e5ce2c3\\\" : rpc error: code = Internal desc = Could not attach volume \\\"vol-0e9ad1bc53e5ce2c3\\\" to node \\\"i-0aa984ac7bb70ba77\\\": could not attach volume \\\"vol-0e9ad1bc53e5ce2c3\\\" to node \\\"i-0aa984ac7bb70ba77\\\": IncorrectState: vol-0e9ad1bc53e5ce2c3 is not 'available'.\\n\\tstatus code: 400, request id: 5ec28d3b-cfe1-42ae-87ba-743bd711d6ec\"\nE0917 07:37:26.922908       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-6481/pvc-tl2m5: storageclass.storage.k8s.io \"provisioning-6481\" not found\nI0917 07:37:26.923203       1 event.go:291] \"Event occurred\" object=\"provisioning-6481/pvc-tl2m5\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-6481\\\" not found\"\nI0917 07:37:27.030896       1 pv_controller.go:879] volume \"local-v6cg8\" entered phase \"Available\"\nI0917 07:37:27.179726       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"ebs.csi.aws.com-vol-0ee87200b524457f6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6\") on node 
\"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:37:27.191009       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"ebs.csi.aws.com-vol-0ee87200b524457f6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ee87200b524457f6\") on node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nE0917 07:37:27.301444       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:37:27.437731       1 event.go:291] \"Event occurred\" object=\"volume-provisioning-6/pvc-blrdj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0917 07:37:27.458996       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-provisioning-6/pvc-blrdj\"\nI0917 07:37:27.515923       1 pv_controller_base.go:505] deletion of claim \"pvc-protection-6650/pvc-protectionbz8gz\" was already processed\nI0917 07:37:27.586096       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nI0917 07:37:27.697183       1 event.go:291] \"Event occurred\" object=\"deployment-5740/test-rollover-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rollover-deployment-78bc8b888c to 0\"\nI0917 07:37:27.697712       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-5740/test-rollover-deployment-78bc8b888c\" need=0 deleting=1\nI0917 07:37:27.697865       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-5740/test-rollover-deployment-78bc8b888c\" relatedReplicaSets=[test-rollover-controller test-rollover-deployment-78bc8b888c test-rollover-deployment-98c5f4599]\nI0917 
07:37:27.698199       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rollover-deployment-78bc8b888c\" pod=\"deployment-5740/test-rollover-deployment-78bc8b888c-6ttzh\"\nI0917 07:37:27.722044       1 namespace_controller.go:185] Namespace has been deleted kubectl-3426\nI0917 07:37:27.745661       1 namespace_controller.go:185] Namespace has been deleted downward-api-9619\nI0917 07:37:27.757478       1 event.go:291] \"Event occurred\" object=\"deployment-5740/test-rollover-deployment-78bc8b888c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rollover-deployment-78bc8b888c-6ttzh\"\nI0917 07:37:27.757702       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5740/test-rollover-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rollover-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0917 07:37:27.767153       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-5740/test-rollover-deployment-98c5f4599\" need=1 creating=1\nI0917 07:37:27.772116       1 event.go:291] \"Event occurred\" object=\"deployment-5740/test-rollover-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rollover-deployment-98c5f4599 to 1\"\nI0917 07:37:27.780741       1 event.go:291] \"Event occurred\" object=\"deployment-5740/test-rollover-deployment-98c5f4599\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rollover-deployment-98c5f4599-glbbs\"\nI0917 07:37:27.814872       1 namespace_controller.go:185] Namespace has been deleted projected-2081\nI0917 07:37:27.845939       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5740/test-rollover-deployment\" err=\"Operation cannot be fulfilled on 
deployments.apps \\\"test-rollover-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0917 07:37:27.893939       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"ebs.csi.aws.com-vol-0e9ad1bc53e5ce2c3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:37:27.953939       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nE0917 07:37:28.040396       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-4172/default: secrets \"default-token-p668f\" is forbidden: unable to create new content in namespace disruption-4172 because it is being terminated\nI0917 07:37:28.229996       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nI0917 07:37:28.230410       1 controller_utils.go:592] \"Deleting pod\" controller=\"suspend-false-to-true\" pod=\"job-9342/suspend-false-to-true--1-tn247\"\nI0917 07:37:28.230811       1 controller_utils.go:592] \"Deleting pod\" controller=\"suspend-false-to-true\" pod=\"job-9342/suspend-false-to-true--1-fmk7j\"\nI0917 07:37:28.236558       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nI0917 07:37:28.239973       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nI0917 07:37:28.240165       1 event.go:291] \"Event occurred\" object=\"job-9342/suspend-false-to-true\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: suspend-false-to-true--1-tn247\"\nI0917 07:37:28.240637       1 event.go:291] \"Event occurred\" object=\"job-9342/suspend-false-to-true\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: suspend-false-to-true--1-fmk7j\"\nI0917 07:37:28.240657       1 event.go:291] \"Event occurred\" object=\"job-9342/suspend-false-to-true\" kind=\"Job\" 
apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Suspended\" message=\"Job suspended\"\nI0917 07:37:28.248355       1 event.go:291] \"Event occurred\" object=\"job-9342/suspend-false-to-true\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Suspended\" message=\"Job suspended\"\nI0917 07:37:28.248909       1 job_controller.go:406] enqueueing job job-9342/suspend-false-to-true\nE0917 07:37:28.439000       1 csi_attacher.go:711] kubernetes.io/csi: attachment for vol-0e9ad1bc53e5ce2c3 failed: rpc error: code = Internal desc = Could not attach volume \"vol-0e9ad1bc53e5ce2c3\" to node \"i-0aa984ac7bb70ba77\": could not attach volume \"vol-0e9ad1bc53e5ce2c3\" to node \"i-0aa984ac7bb70ba77\": IncorrectState: vol-0e9ad1bc53e5ce2c3 is not 'available'.\n\tstatus code: 400, request id: 5ec28d3b-cfe1-42ae-87ba-743bd711d6ec\nI0917 07:37:28.439211       1 actual_state_of_world.go:350] Volume \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3\" is already added to attachedVolume list to node \"ip-172-20-60-186.eu-west-2.compute.internal\", update device path \"\"\nE0917 07:37:28.439410       1 nestedpendingoperations.go:301] Operation for \"{volumeName:kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3 podName: nodeName:}\" failed. No retries permitted until 2021-09-17 07:37:30.439365256 +0000 UTC m=+1153.775096515 (durationBeforeRetry 2s). 
Error: AttachVolume.Attach failed for volume \"ebs.csi.aws.com-vol-0e9ad1bc53e5ce2c3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" : rpc error: code = Internal desc = Could not attach volume \"vol-0e9ad1bc53e5ce2c3\" to node \"i-0aa984ac7bb70ba77\": could not attach volume \"vol-0e9ad1bc53e5ce2c3\" to node \"i-0aa984ac7bb70ba77\": IncorrectState: vol-0e9ad1bc53e5ce2c3 is not 'available'.\n\tstatus code: 400, request id: 5ec28d3b-cfe1-42ae-87ba-743bd711d6ec\nI0917 07:37:28.439870       1 event.go:291] \"Event occurred\" object=\"volume-6716/exec-volume-test-inlinevolume-spr6\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"AttachVolume.Attach failed for volume \\\"ebs.csi.aws.com-vol-0e9ad1bc53e5ce2c3\\\" : rpc error: code = Internal desc = Could not attach volume \\\"vol-0e9ad1bc53e5ce2c3\\\" to node \\\"i-0aa984ac7bb70ba77\\\": could not attach volume \\\"vol-0e9ad1bc53e5ce2c3\\\" to node \\\"i-0aa984ac7bb70ba77\\\": IncorrectState: vol-0e9ad1bc53e5ce2c3 is not 'available'.\\n\\tstatus code: 400, request id: 5ec28d3b-cfe1-42ae-87ba-743bd711d6ec\"\nE0917 07:37:28.451136       1 tokens_controller.go:262] error synchronizing serviceaccount crd-publish-openapi-5590/default: secrets \"default-token-8bm8w\" is forbidden: unable to create new content in namespace crd-publish-openapi-5590 because it is being terminated\nE0917 07:37:28.584647       1 namespace_controller.go:162] deletion of namespace disruption-6912 failed: unexpected items still remain in namespace: disruption-6912 for gvr: /v1, Resource=pods\nI0917 07:37:28.698225       1 namespace_controller.go:185] Namespace has been deleted provisioning-9280\nI0917 07:37:28.806815       1 replica_set.go:453] ReplicaSet \"test-rollover-deployment-98c5f4599\" will be enqueued after 10s for availability check\nE0917 07:37:29.647528       1 tokens_controller.go:262] error synchronizing 
serviceaccount security-context-test-5245/default: secrets \"default-token-9m4rr\" is forbidden: unable to create new content in namespace security-context-test-5245 because it is being terminated\nI0917 07:37:29.709634       1 namespace_controller.go:185] Namespace has been deleted services-7607\nE0917 07:37:30.160803       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:37:30.209231       1 pv_controller.go:930] claim \"provisioning-6481/pvc-tl2m5\" bound to volume \"local-v6cg8\"\nI0917 07:37:30.216429       1 pv_controller.go:879] volume \"local-v6cg8\" entered phase \"Bound\"\nI0917 07:37:30.216663       1 pv_controller.go:982] volume \"local-v6cg8\" bound to claim \"provisioning-6481/pvc-tl2m5\"\nI0917 07:37:30.222757       1 pv_controller.go:823] claim \"provisioning-6481/pvc-tl2m5\" entered phase \"Bound\"\nI0917 07:37:30.223216       1 pv_controller.go:930] claim \"volumemode-2309/pvc-qppsm\" bound to volume \"local-gkp2g\"\nI0917 07:37:30.229825       1 pv_controller.go:879] volume \"local-gkp2g\" entered phase \"Bound\"\nI0917 07:37:30.229855       1 pv_controller.go:982] volume \"local-gkp2g\" bound to claim \"volumemode-2309/pvc-qppsm\"\nI0917 07:37:30.238746       1 pv_controller.go:823] claim \"volumemode-2309/pvc-qppsm\" entered phase \"Bound\"\nE0917 07:37:30.389018       1 tokens_controller.go:262] error synchronizing serviceaccount volume-limits-on-node-7181/default: secrets \"default-token-sv5cr\" is forbidden: unable to create new content in namespace volume-limits-on-node-7181 because it is being terminated\nE0917 07:37:30.405303       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-6823/default: secrets \"default-token-8b5n7\" is forbidden: unable to create new content in namespace downward-api-6823 because it is being terminated\nI0917 
07:37:30.519964       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"ebs.csi.aws.com-vol-0e9ad1bc53e5ce2c3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:37:30.857204       1 namespace_controller.go:185] Namespace has been deleted volume-expand-3559\nI0917 07:37:31.031174       1 namespace_controller.go:185] Namespace has been deleted provisioning-659-3916\nI0917 07:37:31.092695       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"ebs.csi.aws.com-vol-0e9ad1bc53e5ce2c3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3\") from node \"ip-172-20-60-186.eu-west-2.compute.internal\" \nI0917 07:37:31.092751       1 actual_state_of_world.go:350] Volume \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e9ad1bc53e5ce2c3\" is already added to attachedVolume list to node \"ip-172-20-60-186.eu-west-2.compute.internal\", update device path \"\"\nI0917 07:37:31.092886       1 event.go:291] \"Event occurred\" object=\"volume-6716/exec-volume-test-inlinevolume-spr6\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"ebs.csi.aws.com-vol-0e9ad1bc53e5ce2c3\\\" \"\nI0917 07:37:31.330487       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [crd-publish-openapi-test-foo.example.com/v1, Resource=e2e-test-crd-publish-openapi-9078-crds crd-publish-openapi-test-waldo.example.com/v1beta1, Resource=e2e-test-crd-publish-openapi-5067-crds], removed: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-902-crds crd-publish-openapi-test-multi-ver.example.com/v3, Resource=e2e-test-crd-publish-openapi-6303-crds]\nI0917 07:37:31.330852       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for 
e2e-test-crd-publish-openapi-9078-crds.crd-publish-openapi-test-foo.example.com\nI0917 07:37:31.331175       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-5067-crds.crd-publish-openapi-test-waldo.example.com\nI0917 07:37:31.331349       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0917 07:37:31.433173       1 shared_informer.go:247] Caches are synced for resource quota \nI0917 07:37:31.433200       1 resource_quota_controller.go:454] synced quota controller\nI0917 07:37:31.453575       1 namespace_controller.go:185] Namespace has been deleted pv-protection-7035\nI0917 07:37:31.835577       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd-publish-openapi-test-foo.example.com/v1, Resource=e2e-test-crd-publish-openapi-9078-crds crd-publish-openapi-test-waldo.example.com/v1beta1, Resource=e2e-test-crd-publish-openapi-5067-crds], removed: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-902-crds crd-publish-openapi-test-multi-ver.example.com/v3, Resource=e2e-test-crd-publish-openapi-6303-crds]\nI0917 07:37:31.864159       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0917 07:37:31.864256       1 shared_informer.go:247] Caches are synced for garbage collector \nI0917 07:37:31.864271       1 garbagecollector.go:254] synced garbage collector\nI0917 07:37:31.870841       1 namespace_controller.go:185] Namespace has been deleted downward-api-7899\nI0917 07:37:31.876659       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-1433/e2e-test-webhook-swn59\" objectUID=ec976b01-0257-4bd3-a0d5-0b366cca662c kind=\"EndpointSlice\" virtual=false\nI0917 07:37:31.884867       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1433/e2e-test-webhook-swn59\" objectUID=ec976b01-0257-4bd3-a0d5-0b366cca662c kind=\"EndpointSlice\" 
propagationPolicy=Background\nE0917 07:37:32.028427       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0917 07:37:32.032921       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-1433/sample-webhook-deployment-78988fc6cd\" objectUID=00ddd29f-9c2e-4c0a-b67c-103ff9f16a17 kind=\"ReplicaSet\" virtual=false\nI0917 07:37:32.033181       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-1433/sample-webhook-deployment\"\nI0917 07:37:32.041159       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1433/sample-webhook-deployment-78988fc6cd\" objectUID=00ddd29f-9c2e-4c0a-b67c-103ff9f16a17 kind=\"ReplicaSet\" propagationPolicy=Background\nI0917 07:37:32.045551       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-1433/sample-webhook-deployment-78988fc6cd-xmg5r\" objectUID=ae10c926-fb24-4bcc-852c-2ea15595e519 kind=\"Pod\" virtual=false\nI0917 07:37:32.047545       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1433/sample-webhook-deployment-78988fc6cd-xmg5r\" objectUID=ae10c926-fb24-4bcc-852c-2ea15595e519 kind=\"Pod\" propagationPolicy=Background\nE0917 07:37:32.643417       1 tokens_controller.go:262] error synchronizing serviceaccount volumemode-6892/default: secrets \"default-token-wp47d\" is forbidden: unable to create new content in namespace volumemode-6892 because it is being terminated\nE0917 07:37:32.857117       1 tokens_controller.go:262] error synchronizing serviceaccount volume-provisioning-6/default: secrets \"default-token-ngmdf\" is forbidden: unable to create new content in namespace volume-provisioning-6 because it is being terminated\nE0917 07:37:32.910301       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-5320/default: secrets \"default-token-xkq6z\" is 
forbidden: unable to create new content in namespace provisioning-5320 because it is being terminated\nE0917 07:37:33.426028       1 tokens_controller.go:262] error synchronizing serviceaccount pv-27/default: secrets \"default-token-7m8lc\" is forbidden: unable to create new content in namespace pv-27 because it is being terminated\nI0917 07:37:33.503968       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-5590\n==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-ip-172-20-57-60.eu-west-2.compute.internal ====\n==== START logs for container install-cni of pod kube-system/kube-flannel-ds-68kmn ====\n==== END logs for container install-cni of pod kube-system/kube-flannel-ds-68kmn ====\n==== START logs for container kube-flannel of pod kube-system/kube-flannel-ds-68kmn ====\nI0917 07:19:56.319732       1 main.go:518] Determining IP address of default interface\nI0917 07:19:56.320210       1 main.go:531] Using interface with name ens5 and address 172.20.53.192\nI0917 07:19:56.320231       1 main.go:548] Defaulting external address to interface address (172.20.53.192)\nW0917 07:19:56.320249       1 client_config.go:517] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  
This might not work.\nI0917 07:19:56.335762       1 kube.go:119] Waiting 10m0s for node controller to sync\nI0917 07:19:56.335835       1 kube.go:306] Starting kube subnet manager\nI0917 07:19:57.336070       1 kube.go:126] Node controller sync successful\nI0917 07:19:57.336106       1 main.go:246] Created subnet manager: Kubernetes Subnet Manager - ip-172-20-53-192.eu-west-2.compute.internal\nI0917 07:19:57.336112       1 main.go:249] Installing signal handlers\nI0917 07:19:57.336241       1 main.go:390] Found network config - Backend type: vxlan\nI0917 07:19:57.336321       1 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false\nI0917 07:19:57.363744       1 main.go:355] Current network or subnet (100.64.0.0/10, 100.96.1.0/24) is not equal to previous one (0.0.0.0/0, 0.0.0.0/0), trying to recycle old iptables rules\nI0917 07:19:57.385850       1 iptables.go:167] Deleting iptables rule: -s 0.0.0.0/0 -d 0.0.0.0/0 -j RETURN\nI0917 07:19:57.386713       1 iptables.go:167] Deleting iptables rule: -s 0.0.0.0/0 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:19:57.387592       1 iptables.go:167] Deleting iptables rule: ! -s 0.0.0.0/0 -d 0.0.0.0/0 -j RETURN\nI0917 07:19:57.388395       1 iptables.go:167] Deleting iptables rule: ! 
-s 0.0.0.0/0 -d 0.0.0.0/0 -j MASQUERADE --random-fully\nI0917 07:19:57.389487       1 main.go:305] Setting up masking rules\nI0917 07:19:57.390157       1 main.go:313] Changing default FORWARD chain policy to ACCEPT\nI0917 07:19:57.390242       1 main.go:321] Wrote subnet file to /run/flannel/subnet.env\nI0917 07:19:57.390254       1 main.go:325] Running backend.\nI0917 07:19:57.390266       1 main.go:343] Waiting for all goroutines to exit\nI0917 07:19:57.390300       1 vxlan_network.go:60] watching for new subnet leases\nI0917 07:19:57.392257       1 iptables.go:145] Some iptables rules are missing; deleting and recreating rules\nI0917 07:19:57.392269       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 -d 100.64.0.0/10 -j RETURN\nI0917 07:19:57.392498       1 iptables.go:145] Some iptables rules are missing; deleting and recreating rules\nI0917 07:19:57.392506       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 -j ACCEPT\nI0917 07:19:57.393208       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:19:57.393336       1 iptables.go:167] Deleting iptables rule: -d 100.64.0.0/10 -j ACCEPT\nI0917 07:19:57.394072       1 iptables.go:167] Deleting iptables rule: ! -s 100.64.0.0/10 -d 100.96.1.0/24 -j RETURN\nI0917 07:19:57.394835       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 -j ACCEPT\nI0917 07:19:57.395208       1 iptables.go:167] Deleting iptables rule: ! -s 100.64.0.0/10 -d 100.64.0.0/10 -j MASQUERADE --random-fully\nI0917 07:19:57.396351       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 -d 100.64.0.0/10 -j RETURN\nI0917 07:19:57.397393       1 iptables.go:155] Adding iptables rule: -d 100.64.0.0/10 -j ACCEPT\nI0917 07:19:57.399225       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:19:57.401422       1 iptables.go:155] Adding iptables rule: ! 
-s 100.64.0.0/10 -d 100.96.1.0/24 -j RETURN\nI0917 07:19:57.403069       1 iptables.go:155] Adding iptables rule: ! -s 100.64.0.0/10 -d 100.64.0.0/10 -j MASQUERADE --random-fully\n==== END logs for container kube-flannel of pod kube-system/kube-flannel-ds-68kmn ====\n==== START logs for container install-cni of pod kube-system/kube-flannel-ds-88p4n ====\n==== END logs for container install-cni of pod kube-system/kube-flannel-ds-88p4n ====\n==== START logs for container kube-flannel of pod kube-system/kube-flannel-ds-88p4n ====\nI0917 07:20:11.858256       1 main.go:518] Determining IP address of default interface\nI0917 07:20:11.858536       1 main.go:531] Using interface with name ens5 and address 172.20.51.79\nI0917 07:20:11.858553       1 main.go:548] Defaulting external address to interface address (172.20.51.79)\nW0917 07:20:11.858569       1 client_config.go:517] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.\nI0917 07:20:11.866239       1 kube.go:119] Waiting 10m0s for node controller to sync\nI0917 07:20:11.866995       1 kube.go:306] Starting kube subnet manager\nI0917 07:20:12.867119       1 kube.go:126] Node controller sync successful\nI0917 07:20:12.867154       1 main.go:246] Created subnet manager: Kubernetes Subnet Manager - ip-172-20-51-79.eu-west-2.compute.internal\nI0917 07:20:12.867173       1 main.go:249] Installing signal handlers\nI0917 07:20:12.867342       1 main.go:390] Found network config - Backend type: vxlan\nI0917 07:20:12.867411       1 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false\nI0917 07:20:12.889356       1 main.go:355] Current network or subnet (100.64.0.0/10, 100.96.3.0/24) is not equal to previous one (0.0.0.0/0, 0.0.0.0/0), trying to recycle old iptables rules\nI0917 07:20:12.907205       1 iptables.go:167] Deleting iptables rule: -s 0.0.0.0/0 -d 0.0.0.0/0 -j RETURN\nI0917 07:20:12.908120       1 iptables.go:167] Deleting iptables 
rule: -s 0.0.0.0/0 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:20:12.908968       1 iptables.go:167] Deleting iptables rule: ! -s 0.0.0.0/0 -d 0.0.0.0/0 -j RETURN\nI0917 07:20:12.909743       1 iptables.go:167] Deleting iptables rule: ! -s 0.0.0.0/0 -d 0.0.0.0/0 -j MASQUERADE --random-fully\nI0917 07:20:12.910692       1 main.go:305] Setting up masking rules\nI0917 07:20:12.911328       1 main.go:313] Changing default FORWARD chain policy to ACCEPT\nI0917 07:20:12.911408       1 main.go:321] Wrote subnet file to /run/flannel/subnet.env\nI0917 07:20:12.911424       1 main.go:325] Running backend.\nI0917 07:20:12.911436       1 main.go:343] Waiting for all goroutines to exit\nI0917 07:20:12.911463       1 vxlan_network.go:60] watching for new subnet leases\nI0917 07:20:12.913461       1 iptables.go:145] Some iptables rules are missing; deleting and recreating rules\nI0917 07:20:12.913485       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 -d 100.64.0.0/10 -j RETURN\nI0917 07:20:12.914002       1 iptables.go:145] Some iptables rules are missing; deleting and recreating rules\nI0917 07:20:12.914013       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 -j ACCEPT\nI0917 07:20:12.914597       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:20:12.914914       1 iptables.go:167] Deleting iptables rule: -d 100.64.0.0/10 -j ACCEPT\nI0917 07:20:12.915691       1 iptables.go:167] Deleting iptables rule: ! -s 100.64.0.0/10 -d 100.96.3.0/24 -j RETURN\nI0917 07:20:12.916542       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 -j ACCEPT\nI0917 07:20:12.917442       1 iptables.go:167] Deleting iptables rule: ! 
-s 100.64.0.0/10 -d 100.64.0.0/10 -j MASQUERADE --random-fully\nI0917 07:20:12.918435       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 -d 100.64.0.0/10 -j RETURN\nI0917 07:20:12.918905       1 iptables.go:155] Adding iptables rule: -d 100.64.0.0/10 -j ACCEPT\nI0917 07:20:12.920646       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:20:12.922387       1 iptables.go:155] Adding iptables rule: ! -s 100.64.0.0/10 -d 100.96.3.0/24 -j RETURN\nI0917 07:20:12.923894       1 iptables.go:155] Adding iptables rule: ! -s 100.64.0.0/10 -d 100.64.0.0/10 -j MASQUERADE --random-fully\n==== END logs for container kube-flannel of pod kube-system/kube-flannel-ds-88p4n ====\n==== START logs for container install-cni of pod kube-system/kube-flannel-ds-bwnbx ====\n==== END logs for container install-cni of pod kube-system/kube-flannel-ds-bwnbx ====\n==== START logs for container kube-flannel of pod kube-system/kube-flannel-ds-bwnbx ====\nI0917 07:18:48.066902       1 main.go:518] Determining IP address of default interface\nI0917 07:18:48.067198       1 main.go:531] Using interface with name ens5 and address 172.20.57.60\nI0917 07:18:48.067217       1 main.go:548] Defaulting external address to interface address (172.20.57.60)\nW0917 07:18:48.067232       1 client_config.go:517] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  
This might not work.\nI0917 07:18:48.078955       1 kube.go:119] Waiting 10m0s for node controller to sync\nI0917 07:18:48.079721       1 kube.go:306] Starting kube subnet manager\nI0917 07:18:49.079757       1 kube.go:126] Node controller sync successful\nI0917 07:18:49.079785       1 main.go:246] Created subnet manager: Kubernetes Subnet Manager - ip-172-20-57-60.eu-west-2.compute.internal\nI0917 07:18:49.079805       1 main.go:249] Installing signal handlers\nI0917 07:18:49.079870       1 main.go:390] Found network config - Backend type: vxlan\nI0917 07:18:49.079917       1 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false\nI0917 07:18:49.107150       1 main.go:355] Current network or subnet (100.64.0.0/10, 100.96.0.0/24) is not equal to previous one (0.0.0.0/0, 0.0.0.0/0), trying to recycle old iptables rules\nI0917 07:18:49.126520       1 iptables.go:167] Deleting iptables rule: -s 0.0.0.0/0 -d 0.0.0.0/0 -j RETURN\nI0917 07:18:49.131075       1 iptables.go:167] Deleting iptables rule: -s 0.0.0.0/0 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:18:49.131869       1 iptables.go:167] Deleting iptables rule: ! -s 0.0.0.0/0 -d 0.0.0.0/0 -j RETURN\nI0917 07:18:49.132735       1 iptables.go:167] Deleting iptables rule: ! 
-s 0.0.0.0/0 -d 0.0.0.0/0 -j MASQUERADE --random-fully\nI0917 07:18:49.133569       1 main.go:305] Setting up masking rules\nI0917 07:18:49.134251       1 main.go:313] Changing default FORWARD chain policy to ACCEPT\nI0917 07:18:49.134319       1 main.go:321] Wrote subnet file to /run/flannel/subnet.env\nI0917 07:18:49.134327       1 main.go:325] Running backend.\nI0917 07:18:49.134338       1 main.go:343] Waiting for all goroutines to exit\nI0917 07:18:49.134356       1 vxlan_network.go:60] watching for new subnet leases\nI0917 07:18:49.136489       1 iptables.go:145] Some iptables rules are missing; deleting and recreating rules\nI0917 07:18:49.136501       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 -j ACCEPT\nI0917 07:18:49.136781       1 iptables.go:145] Some iptables rules are missing; deleting and recreating rules\nI0917 07:18:49.136792       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 -d 100.64.0.0/10 -j RETURN\nI0917 07:18:49.137189       1 iptables.go:167] Deleting iptables rule: -d 100.64.0.0/10 -j ACCEPT\nI0917 07:18:49.137845       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:18:49.138121       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 -j ACCEPT\nI0917 07:18:49.138683       1 iptables.go:167] Deleting iptables rule: ! -s 100.64.0.0/10 -d 100.96.0.0/24 -j RETURN\nI0917 07:18:49.139449       1 iptables.go:167] Deleting iptables rule: ! -s 100.64.0.0/10 -d 100.64.0.0/10 -j MASQUERADE --random-fully\nI0917 07:18:49.141274       1 iptables.go:155] Adding iptables rule: -d 100.64.0.0/10 -j ACCEPT\nI0917 07:18:49.142286       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 -d 100.64.0.0/10 -j RETURN\nI0917 07:18:49.144339       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:18:49.145818       1 iptables.go:155] Adding iptables rule: ! 
-s 100.64.0.0/10 -d 100.96.0.0/24 -j RETURN\nI0917 07:18:49.148506       1 iptables.go:155] Adding iptables rule: ! -s 100.64.0.0/10 -d 100.64.0.0/10 -j MASQUERADE --random-fully\n==== END logs for container kube-flannel of pod kube-system/kube-flannel-ds-bwnbx ====\n==== START logs for container install-cni of pod kube-system/kube-flannel-ds-c47nd ====\n==== END logs for container install-cni of pod kube-system/kube-flannel-ds-c47nd ====\n==== START logs for container kube-flannel of pod kube-system/kube-flannel-ds-c47nd ====\nI0917 07:20:11.811107       1 main.go:518] Determining IP address of default interface\nI0917 07:20:11.811626       1 main.go:531] Using interface with name ens5 and address 172.20.60.186\nI0917 07:20:11.811664       1 main.go:548] Defaulting external address to interface address (172.20.60.186)\nW0917 07:20:11.811682       1 client_config.go:517] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.\nI0917 07:20:11.822125       1 kube.go:119] Waiting 10m0s for node controller to sync\nI0917 07:20:11.823079       1 kube.go:306] Starting kube subnet manager\nI0917 07:20:12.824052       1 kube.go:126] Node controller sync successful\nI0917 07:20:12.824085       1 main.go:246] Created subnet manager: Kubernetes Subnet Manager - ip-172-20-60-186.eu-west-2.compute.internal\nI0917 07:20:12.824091       1 main.go:249] Installing signal handlers\nI0917 07:20:12.824205       1 main.go:390] Found network config - Backend type: vxlan\nI0917 07:20:12.824272       1 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false\nI0917 07:20:12.848864       1 main.go:355] Current network or subnet (100.64.0.0/10, 100.96.2.0/24) is not equal to previous one (0.0.0.0/0, 0.0.0.0/0), trying to recycle old iptables rules\nI0917 07:20:12.884993       1 iptables.go:167] Deleting iptables rule: -s 0.0.0.0/0 -d 0.0.0.0/0 -j RETURN\nI0917 07:20:12.888416       1 iptables.go:167] Deleting iptables 
rule: -s 0.0.0.0/0 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:20:12.890809       1 iptables.go:167] Deleting iptables rule: ! -s 0.0.0.0/0 -d 0.0.0.0/0 -j RETURN\nI0917 07:20:12.892645       1 iptables.go:167] Deleting iptables rule: ! -s 0.0.0.0/0 -d 0.0.0.0/0 -j MASQUERADE --random-fully\nI0917 07:20:12.893871       1 main.go:305] Setting up masking rules\nI0917 07:20:12.894616       1 main.go:313] Changing default FORWARD chain policy to ACCEPT\nI0917 07:20:12.894730       1 main.go:321] Wrote subnet file to /run/flannel/subnet.env\nI0917 07:20:12.894741       1 main.go:325] Running backend.\nI0917 07:20:12.894753       1 main.go:343] Waiting for all goroutines to exit\nI0917 07:20:12.894774       1 vxlan_network.go:60] watching for new subnet leases\nI0917 07:20:12.897724       1 iptables.go:145] Some iptables rules are missing; deleting and recreating rules\nI0917 07:20:12.897743       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 -d 100.64.0.0/10 -j RETURN\nI0917 07:20:12.907531       1 iptables.go:145] Some iptables rules are missing; deleting and recreating rules\nI0917 07:20:12.907555       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 -j ACCEPT\nI0917 07:20:12.908880       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:20:12.908889       1 iptables.go:167] Deleting iptables rule: -d 100.64.0.0/10 -j ACCEPT\nI0917 07:20:12.911410       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 -j ACCEPT\nI0917 07:20:12.911669       1 iptables.go:167] Deleting iptables rule: ! -s 100.64.0.0/10 -d 100.96.2.0/24 -j RETURN\nI0917 07:20:12.914998       1 iptables.go:167] Deleting iptables rule: ! 
-s 100.64.0.0/10 -d 100.64.0.0/10 -j MASQUERADE --random-fully\nI0917 07:20:12.915985       1 iptables.go:155] Adding iptables rule: -d 100.64.0.0/10 -j ACCEPT\nI0917 07:20:12.916692       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 -d 100.64.0.0/10 -j RETURN\nI0917 07:20:12.919979       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:20:12.923135       1 iptables.go:155] Adding iptables rule: ! -s 100.64.0.0/10 -d 100.96.2.0/24 -j RETURN\nI0917 07:20:12.924949       1 iptables.go:155] Adding iptables rule: ! -s 100.64.0.0/10 -d 100.64.0.0/10 -j MASQUERADE --random-fully\n==== END logs for container kube-flannel of pod kube-system/kube-flannel-ds-c47nd ====\n==== START logs for container install-cni of pod kube-system/kube-flannel-ds-lc9q8 ====\n==== END logs for container install-cni of pod kube-system/kube-flannel-ds-lc9q8 ====\n==== START logs for container kube-flannel of pod kube-system/kube-flannel-ds-lc9q8 ====\nI0917 07:20:14.492232       1 main.go:518] Determining IP address of default interface\nI0917 07:20:14.492575       1 main.go:531] Using interface with name ens5 and address 172.20.33.78\nI0917 07:20:14.492603       1 main.go:548] Defaulting external address to interface address (172.20.33.78)\nW0917 07:20:14.492621       1 client_config.go:517] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  
This might not work.\nI0917 07:20:14.502226       1 kube.go:119] Waiting 10m0s for node controller to sync\nI0917 07:20:14.502668       1 kube.go:306] Starting kube subnet manager\nI0917 07:20:15.507873       1 kube.go:126] Node controller sync successful\nI0917 07:20:15.507916       1 main.go:246] Created subnet manager: Kubernetes Subnet Manager - ip-172-20-33-78.eu-west-2.compute.internal\nI0917 07:20:15.507923       1 main.go:249] Installing signal handlers\nI0917 07:20:15.508158       1 main.go:390] Found network config - Backend type: vxlan\nI0917 07:20:15.508285       1 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false\nI0917 07:20:15.536286       1 main.go:355] Current network or subnet (100.64.0.0/10, 100.96.4.0/24) is not equal to previous one (0.0.0.0/0, 0.0.0.0/0), trying to recycle old iptables rules\nI0917 07:20:15.554086       1 iptables.go:167] Deleting iptables rule: -s 0.0.0.0/0 -d 0.0.0.0/0 -j RETURN\nI0917 07:20:15.555151       1 iptables.go:167] Deleting iptables rule: -s 0.0.0.0/0 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:20:15.556074       1 iptables.go:167] Deleting iptables rule: ! -s 0.0.0.0/0 -d 0.0.0.0/0 -j RETURN\nI0917 07:20:15.556982       1 iptables.go:167] Deleting iptables rule: ! 
-s 0.0.0.0/0 -d 0.0.0.0/0 -j MASQUERADE --random-fully\nI0917 07:20:15.558047       1 main.go:305] Setting up masking rules\nI0917 07:20:15.558780       1 main.go:313] Changing default FORWARD chain policy to ACCEPT\nI0917 07:20:15.558895       1 main.go:321] Wrote subnet file to /run/flannel/subnet.env\nI0917 07:20:15.558907       1 main.go:325] Running backend.\nI0917 07:20:15.558919       1 main.go:343] Waiting for all goroutines to exit\nI0917 07:20:15.558938       1 vxlan_network.go:60] watching for new subnet leases\nI0917 07:20:15.562130       1 iptables.go:145] Some iptables rules are missing; deleting and recreating rules\nI0917 07:20:15.562148       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 -d 100.64.0.0/10 -j RETURN\nI0917 07:20:15.564835       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:20:15.566249       1 iptables.go:167] Deleting iptables rule: ! -s 100.64.0.0/10 -d 100.96.4.0/24 -j RETURN\nI0917 07:20:15.567271       1 iptables.go:167] Deleting iptables rule: ! -s 100.64.0.0/10 -d 100.64.0.0/10 -j MASQUERADE --random-fully\nI0917 07:20:15.568097       1 iptables.go:145] Some iptables rules are missing; deleting and recreating rules\nI0917 07:20:15.568112       1 iptables.go:167] Deleting iptables rule: -s 100.64.0.0/10 -j ACCEPT\nI0917 07:20:15.568520       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 -d 100.64.0.0/10 -j RETURN\nI0917 07:20:15.569310       1 iptables.go:167] Deleting iptables rule: -d 100.64.0.0/10 -j ACCEPT\nI0917 07:20:15.570660       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 -j ACCEPT\nI0917 07:20:15.571214       1 iptables.go:155] Adding iptables rule: -s 100.64.0.0/10 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully\nI0917 07:20:15.573155       1 iptables.go:155] Adding iptables rule: -d 100.64.0.0/10 -j ACCEPT\nI0917 07:20:15.573896       1 iptables.go:155] Adding iptables rule: ! 
-s 100.64.0.0/10 -d 100.96.4.0/24 -j RETURN\nI0917 07:20:15.575885       1 iptables.go:155] Adding iptables rule: ! -s 100.64.0.0/10 -d 100.64.0.0/10 -j MASQUERADE --random-fully\n==== END logs for container kube-flannel of pod kube-system/kube-flannel-ds-lc9q8 ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-33-78.eu-west-2.compute.internal ====\nI0917 07:19:09.708613       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0917 07:19:09.708941       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0917 07:19:09.708955       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0917 07:19:09.708964       1 flags.go:59] FLAG: --bind-address-hard-fail=\"false\"\nI0917 07:19:09.709014       1 flags.go:59] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0917 07:19:09.709022       1 flags.go:59] FLAG: --cleanup=\"false\"\nI0917 07:19:09.709027       1 flags.go:59] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI0917 07:19:09.709033       1 flags.go:59] FLAG: --config=\"\"\nI0917 07:19:09.709039       1 flags.go:59] FLAG: --config-sync-period=\"15m0s\"\nI0917 07:19:09.709049       1 flags.go:59] FLAG: --conntrack-max-per-core=\"131072\"\nI0917 07:19:09.709055       1 flags.go:59] FLAG: --conntrack-min=\"131072\"\nI0917 07:19:09.709091       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0917 07:19:09.709097       1 flags.go:59] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0917 07:19:09.709101       1 flags.go:59] FLAG: --detect-local-mode=\"\"\nI0917 07:19:09.709187       1 flags.go:59] FLAG: --feature-gates=\"\"\nI0917 07:19:09.709195       1 flags.go:59] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0917 07:19:09.709201       1 flags.go:59] FLAG: --healthz-port=\"10256\"\nI0917 07:19:09.709206       1 flags.go:59] FLAG: --help=\"false\"\nI0917 07:19:09.709212       1 flags.go:59] FLAG: --hostname-override=\"ip-172-20-33-78.eu-west-2.compute.internal\"\nI0917 07:19:09.709218       1 flags.go:59] 
FLAG: --iptables-masquerade-bit=\"14\"\nI0917 07:19:09.709223       1 flags.go:59] FLAG: --iptables-min-sync-period=\"1s\"\nI0917 07:19:09.709229       1 flags.go:59] FLAG: --iptables-sync-period=\"30s\"\nI0917 07:19:09.709234       1 flags.go:59] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0917 07:19:09.709248       1 flags.go:59] FLAG: --ipvs-min-sync-period=\"0s\"\nI0917 07:19:09.709273       1 flags.go:59] FLAG: --ipvs-scheduler=\"\"\nI0917 07:19:09.709279       1 flags.go:59] FLAG: --ipvs-strict-arp=\"false\"\nI0917 07:19:09.709284       1 flags.go:59] FLAG: --ipvs-sync-period=\"30s\"\nI0917 07:19:09.709289       1 flags.go:59] FLAG: --ipvs-tcp-timeout=\"0s\"\nI0917 07:19:09.709293       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI0917 07:19:09.709297       1 flags.go:59] FLAG: --ipvs-udp-timeout=\"0s\"\nI0917 07:19:09.709301       1 flags.go:59] FLAG: --kube-api-burst=\"10\"\nI0917 07:19:09.709306       1 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0917 07:19:09.709329       1 flags.go:59] FLAG: --kube-api-qps=\"5\"\nI0917 07:19:09.709346       1 flags.go:59] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI0917 07:19:09.709366       1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI0917 07:19:09.709375       1 flags.go:59] FLAG: --log-dir=\"\"\nI0917 07:19:09.709380       1 flags.go:59] FLAG: --log-file=\"/var/log/kube-proxy.log\"\nI0917 07:19:09.709385       1 flags.go:59] FLAG: --log-file-max-size=\"1800\"\nI0917 07:19:09.709390       1 flags.go:59] FLAG: --log-flush-frequency=\"5s\"\nI0917 07:19:09.709394       1 flags.go:59] FLAG: --logtostderr=\"false\"\nI0917 07:19:09.709399       1 flags.go:59] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI0917 07:19:09.709405       1 flags.go:59] FLAG: --masquerade-all=\"false\"\nI0917 07:19:09.709428       1 flags.go:59] FLAG: --master=\"https://api.internal.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io\"\nI0917 07:19:09.709434       1 flags.go:59] 
FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0917 07:19:09.709439       1 flags.go:59] FLAG: --metrics-port=\"10249\"\nI0917 07:19:09.709538       1 flags.go:59] FLAG: --nodeport-addresses=\"[]\"\nI0917 07:19:09.709547       1 flags.go:59] FLAG: --one-output=\"false\"\nI0917 07:19:09.709552       1 flags.go:59] FLAG: --oom-score-adj=\"-998\"\nI0917 07:19:09.709558       1 flags.go:59] FLAG: --profiling=\"false\"\nI0917 07:19:09.709563       1 flags.go:59] FLAG: --proxy-mode=\"\"\nI0917 07:19:09.709573       1 flags.go:59] FLAG: --proxy-port-range=\"\"\nI0917 07:19:09.709604       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI0917 07:19:09.709634       1 flags.go:59] FLAG: --skip-headers=\"false\"\nI0917 07:19:09.709640       1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI0917 07:19:09.709644       1 flags.go:59] FLAG: --stderrthreshold=\"2\"\nI0917 07:19:09.709649       1 flags.go:59] FLAG: --udp-timeout=\"250ms\"\nI0917 07:19:09.709654       1 flags.go:59] FLAG: --v=\"2\"\nI0917 07:19:09.709659       1 flags.go:59] FLAG: --version=\"false\"\nI0917 07:19:09.709666       1 flags.go:59] FLAG: --vmodule=\"\"\nI0917 07:19:09.709671       1 flags.go:59] FLAG: --write-config-to=\"\"\nW0917 07:19:09.709704       1 server.go:224] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. 
Please begin using a config file ASAP.\nI0917 07:19:09.709825       1 feature_gate.go:245] feature gates: &{map[]}\nI0917 07:19:09.710130       1 feature_gate.go:245] feature gates: &{map[]}\nE0917 07:19:39.748689       1 node.go:161] Failed to retrieve node info: Get \"https://api.internal.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io/api/v1/nodes/ip-172-20-33-78.eu-west-2.compute.internal\": dial tcp 203.0.113.123:443: i/o timeout\nE0917 07:20:10.805770       1 node.go:161] Failed to retrieve node info: Get \"https://api.internal.e2e-8c8dd1f93e-d17d5.test-cncf-aws.k8s.io/api/v1/nodes/ip-172-20-33-78.eu-west-2.compute.internal\": dial tcp 203.0.113.123:443: i/o timeout\nI0917 07:20:12.945220       1 node.go:172] Successfully retrieved node IP: 172.20.33.78\nI0917 07:20:12.945247       1 server_others.go:140] Detected node IP 172.20.33.78\nW0917 07:20:12.945304       1 server_others.go:565] Unknown proxy mode \"\", assuming iptables proxy\nI0917 07:20:12.945413       1 server_others.go:177] DetectLocalMode: 'ClusterCIDR'\nI0917 07:20:12.990256       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary\nI0917 07:20:12.990298       1 server_others.go:212] Using iptables Proxier.\nI0917 07:20:12.990313       1 server_others.go:219] creating dualStackProxier for iptables.\nW0917 07:20:12.990333       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6\nI0917 07:20:12.990423       1 utils.go:370] Changed sysctl \"net/ipv4/conf/all/route_localnet\": 0 -> 1\nI0917 07:20:12.990490       1 proxier.go:281] \"Using iptables mark for masquerade\" ipFamily=IPv4 mark=\"0x00004000\"\nI0917 07:20:12.990536       1 proxier.go:327] \"Iptables sync params\" ipFamily=IPv4 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0917 07:20:12.990582       1 proxier.go:337] \"Iptables supports --random-fully\" ipFamily=IPv4\nI0917 07:20:12.990641       1 proxier.go:281] \"Using 
iptables mark for masquerade\" ipFamily=IPv6 mark=\"0x00004000\"\nI0917 07:20:12.990693       1 proxier.go:327] \"Iptables sync params\" ipFamily=IPv6 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI0917 07:20:12.990708       1 proxier.go:337] \"Iptables supports --random-fully\" ipFamily=IPv6\nI0917 07:20:12.990889       1 server.go:649] Version: v1.22.2\nI0917 07:20:12.993118       1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI0917 07:20:12.993176       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI0917 07:20:12.993233       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI0917 07:20:12.994867       1 config.go:315] Starting service config controller\nI0917 07:20:12.994886       1 shared_informer.go:240] Waiting for caches to sync for service config\nI0917 07:20:12.994915       1 config.go:224] Starting endpoint slice config controller\nI0917 07:20:12.994925       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\nE0917 07:20:13.002577       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"ip-172-20-33-78.eu-west-2.compute.internal.16a58af23c8160d8\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc0492e4b3b37a105, ext:63304071338, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:\"kube-proxy\", ReportingInstance:\"kube-proxy-ip-172-20-33-78\", 
Action:\"StartKubeProxy\", Reason:\"Starting\", Regarding:v1.ObjectReference{Kind:\"Node\", Namespace:\"\", Name:\"ip-172-20-33-78.eu-west-2.compute.internal\", UID:\"ip-172-20-33-78.eu-west-2.compute.internal\", APIVersion:\"\", ResourceVersion:\"\", FieldPath:\"\"}, Related:(*v1.ObjectReference)(nil), Note:\"\", Type:\"Normal\", DeprecatedSource:v1.EventSource{Component:\"\", Host:\"\"}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event \"ip-172-20-33-78.eu-west-2.compute.internal.16a58af23c8160d8\" is invalid: involvedObject.namespace: Invalid value: \"\": does not match event.namespace' (will not retry!)\nI0917 07:20:13.002732       1 service.go:301] Service kube-system/kube-dns updated: 3 ports\nI0917 07:20:13.002762       1 service.go:301] Service default/kubernetes updated: 1 ports\nI0917 07:20:13.095066       1 shared_informer.go:247] Caches are synced for endpoint slice config \nI0917 07:20:13.095203       1 proxier.go:804] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0917 07:20:13.095422       1 proxier.go:804] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI0917 07:20:13.095441       1 shared_informer.go:247] Caches are synced for service config \nI0917 07:20:13.095482       1 service.go:416] Adding new service port \"kube-system/kube-dns:dns\" at 100.64.0.10:53/UDP\nI0917 07:20:13.095499       1 service.go:416] Adding new service port \"kube-system/kube-dns:dns-tcp\" at 100.64.0.10:53/TCP\nI0917 07:20:13.095510       1 service.go:416] Adding new service port \"kube-system/kube-dns:metrics\" at 100.64.0.10:9153/TCP\nI0917 07:20:13.095521       1 service.go:416] Adding new service port \"default/kubernetes:https\" at 100.64.0.1:443/TCP\nI0917 07:20:13.095649       1 proxier.go:829] \"Stale service\" 
protocol=\"udp\" svcPortName=\"kube-system/kube-dns:dns\" clusterIP=\"100.64.0.10\"\nI0917 07:20:13.095668       1 proxier.go:845] \"Syncing iptables rules\"\nI0917 07:20:13.174196       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"78.722121ms\"\nI0917 07:20:13.174241       1 proxier.go:845] \"Syncing iptables rules\"\nI0917 07:20:13.241019       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"66.784821ms\"\nI0917 07:23:16.790049       1 service.go:301] Service services-1070/nodeport-update-service updated: 1 ports\nI0917 07:23:16.790143       1 service.go:416] Adding new service port \"services-1070/nodeport-update-service\" at 100.69.212.62:80/TCP\nI0917 07:23:16.790183       1 proxier.go:845] \"Syncing iptables rules\"\nI0917 07:23:16.843893       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"53.744532ms\"\nI0917 07:23:16.843972       1 proxier.go:845] \"Syncing iptables rules\"\nI0917 07:23:16.900303       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"56.35984ms\"\nI0917 07:23:17.012121       1 service.go:301] Service services-1070/nodeport-update-service updated: 1 ports\nI0917 07:23:17.901209       1 service.go:416] Adding new service port \"services-1070/nodeport-update-service:tcp-port\" at 100.69.212.62:80/TCP\nI0917 07:23:17.901237       1 service.go:441] Removing service port \"services-1070/nodeport-update-service\"\nI0917 07:23:17.901272       1 proxier.go:845] \"Syncing iptables rules\"\nI0917 07:23:17.950175       1 proxier.go:1283] \"Opened local port\" port=\"\\\"nodePort for services-1070/nodeport-update-service:tcp-port\\\" (:31720/tcp4)\"\nI0917 07:23:17.957218       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"56.024729ms\"\nI0917 07:23:26.857256       1 service.go:301] Service svc-latency-9976/latency-svc-xz592 updated: 1 ports\nI0917 07:23:26.857308       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-xz592\" at 100.70.21.203:80/TCP\nI0917 07:23:26.857344       1 
proxier.go:845] \"Syncing iptables rules\"\nI0917 07:23:26.900532       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"43.211424ms\"\nI0917 07:23:26.900605       1 proxier.go:845] \"Syncing iptables rules\"\nI0917 07:23:26.943346       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"42.761985ms\"\nI0917 07:23:26.966716       1 service.go:301] Service svc-latency-9976/latency-svc-lpkxz updated: 1 ports\nI0917 07:23:26.972594       1 service.go:301] Service svc-latency-9976/latency-svc-r8mnb updated: 1 ports\nI0917 07:23:26.982413       1 service.go:301] Service svc-latency-9976/latency-svc-lqcm2 updated: 1 ports\nI0917 07:23:26.990469       1 service.go:301] Service svc-latency-9976/latency-svc-t78qm updated: 1 ports\nI0917 07:23:26.994669       1 service.go:301] Service svc-latency-9976/latency-svc-94dmp updated: 1 ports\nI0917 07:23:27.065823       1 service.go:301] Service svc-latency-9976/latency-svc-t8kcr updated: 1 ports\nI0917 07:23:27.075922       1 service.go:301] Service svc-latency-9976/latency-svc-m7prd updated: 1 ports\nI0917 07:23:27.093223       1 service.go:301] Service svc-latency-9976/latency-svc-cjztx updated: 1 ports\nI0917 07:23:27.103746       1 service.go:301] Service svc-latency-9976/latency-svc-k2bcg updated: 1 ports\nI0917 07:23:27.117048       1 service.go:301] Service svc-latency-9976/latency-svc-z646q updated: 1 ports\nI0917 07:23:27.119282       1 service.go:301] Service svc-latency-9976/latency-svc-bb78l updated: 1 ports\nI0917 07:23:27.130776       1 service.go:301] Service svc-latency-9976/latency-svc-ldt27 updated: 1 ports\nI0917 07:23:27.140863       1 service.go:301] Service svc-latency-9976/latency-svc-z242z updated: 1 ports\nI0917 07:23:27.148630       1 service.go:301] Service svc-latency-9976/latency-svc-7b2cm updated: 1 ports\nI0917 07:23:27.160547       1 service.go:301] Service svc-latency-9976/latency-svc-r8m9z updated: 1 ports\nI0917 07:23:27.168757       1 service.go:301] Service 
svc-latency-9976/latency-svc-hptqz updated: 1 ports\nI0917 07:23:27.176093       1 service.go:301] Service svc-latency-9976/latency-svc-2sgkn updated: 1 ports\nI0917 07:23:27.182495       1 service.go:301] Service svc-latency-9976/latency-svc-vzbss updated: 1 ports\nI0917 07:23:27.190249       1 service.go:301] Service svc-latency-9976/latency-svc-wfkxg updated: 1 ports\nI0917 07:23:27.203642       1 service.go:301] Service svc-latency-9976/latency-svc-6sr9x updated: 1 ports\nI0917 07:23:27.210704       1 service.go:301] Service svc-latency-9976/latency-svc-svx28 updated: 1 ports\nI0917 07:23:27.229948       1 service.go:301] Service svc-latency-9976/latency-svc-lmft6 updated: 1 ports\nI0917 07:23:27.233663       1 service.go:301] Service svc-latency-9976/latency-svc-2tb5k updated: 1 ports\nI0917 07:23:27.242768       1 service.go:301] Service svc-latency-9976/latency-svc-ppvx4 updated: 1 ports\nI0917 07:23:27.257210       1 service.go:301] Service svc-latency-9976/latency-svc-6ccnl updated: 1 ports\nI0917 07:23:27.264974       1 service.go:301] Service svc-latency-9976/latency-svc-ck249 updated: 1 ports\nI0917 07:23:27.268889       1 service.go:301] Service svc-latency-9976/latency-svc-lw9dg updated: 1 ports\nI0917 07:23:27.277273       1 service.go:301] Service svc-latency-9976/latency-svc-n79dz updated: 1 ports\nI0917 07:23:27.286994       1 service.go:301] Service svc-latency-9976/latency-svc-9m86s updated: 1 ports\nI0917 07:23:27.295610       1 service.go:301] Service svc-latency-9976/latency-svc-vc7pb updated: 1 ports\nI0917 07:23:27.300113       1 service.go:301] Service svc-latency-9976/latency-svc-thrlx updated: 1 ports\nI0917 07:23:27.313152       1 service.go:301] Service svc-latency-9976/latency-svc-l2zd6 updated: 1 ports\nI0917 07:23:27.320753       1 service.go:301] Service svc-latency-9976/latency-svc-rsps2 updated: 1 ports\nI0917 07:23:27.328814       1 service.go:301] Service svc-latency-9976/latency-svc-f6m9h updated: 1 ports\nI0917 
07:23:27.339880       1 service.go:301] Service svc-latency-9976/latency-svc-gqtcg updated: 1 ports\nI0917 07:23:27.346321       1 service.go:301] Service svc-latency-9976/latency-svc-7b5nn updated: 1 ports\nI0917 07:23:27.353975       1 service.go:301] Service svc-latency-9976/latency-svc-np9x6 updated: 1 ports\nI0917 07:23:27.367419       1 service.go:301] Service svc-latency-9976/latency-svc-qzz4p updated: 1 ports\nI0917 07:23:27.379014       1 service.go:301] Service svc-latency-9976/latency-svc-57m5w updated: 1 ports\nI0917 07:23:27.385969       1 service.go:301] Service svc-latency-9976/latency-svc-dnqxd updated: 1 ports\nI0917 07:23:27.393157       1 service.go:301] Service svc-latency-9976/latency-svc-zc77s updated: 1 ports\nI0917 07:23:27.398179       1 service.go:301] Service svc-latency-9976/latency-svc-hkp6l updated: 1 ports\nI0917 07:23:27.404159       1 service.go:301] Service svc-latency-9976/latency-svc-vn92t updated: 1 ports\nI0917 07:23:27.425198       1 service.go:301] Service svc-latency-9976/latency-svc-xs2h7 updated: 1 ports\nI0917 07:23:27.428403       1 service.go:301] Service svc-latency-9976/latency-svc-kbs8h updated: 1 ports\nI0917 07:23:27.435289       1 service.go:301] Service svc-latency-9976/latency-svc-rs2rd updated: 1 ports\nI0917 07:23:27.439157       1 service.go:301] Service svc-latency-9976/latency-svc-9dzg8 updated: 1 ports\nI0917 07:23:27.446579       1 service.go:301] Service svc-latency-9976/latency-svc-sssx4 updated: 1 ports\nI0917 07:23:27.453336       1 service.go:301] Service svc-latency-9976/latency-svc-nn869 updated: 1 ports\nI0917 07:23:27.457798       1 service.go:301] Service svc-latency-9976/latency-svc-tqcsg updated: 1 ports\nI0917 07:23:27.462552       1 service.go:301] Service svc-latency-9976/latency-svc-ngqb8 updated: 1 ports\nI0917 07:23:27.468120       1 service.go:301] Service svc-latency-9976/latency-svc-qmxtg updated: 1 ports\nI0917 07:23:27.485818       1 service.go:301] Service 
svc-latency-9976/latency-svc-kwf2z updated: 1 ports\nI0917 07:23:27.534413       1 service.go:301] Service svc-latency-9976/latency-svc-kchmk updated: 1 ports\nI0917 07:23:27.577956       1 service.go:301] Service svc-latency-9976/latency-svc-pwws8 updated: 1 ports\nI0917 07:23:27.640666       1 service.go:301] Service svc-latency-9976/latency-svc-z6lcp updated: 1 ports\nI0917 07:23:27.678607       1 service.go:301] Service svc-latency-9976/latency-svc-58dft updated: 1 ports\nI0917 07:23:27.728777       1 service.go:301] Service svc-latency-9976/latency-svc-4zwqg updated: 1 ports\nI0917 07:23:27.776769       1 service.go:301] Service svc-latency-9976/latency-svc-8mw5v updated: 1 ports\nI0917 07:23:27.829055       1 service.go:301] Service svc-latency-9976/latency-svc-87979 updated: 1 ports\nI0917 07:23:27.880597       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-pwws8\" at 100.69.87.0:80/TCP\nI0917 07:23:27.880626       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-bb78l\" at 100.67.114.8:80/TCP\nI0917 07:23:27.880638       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-ldt27\" at 100.65.28.183:80/TCP\nI0917 07:23:27.880652       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-7b2cm\" at 100.71.195.32:80/TCP\nI0917 07:23:27.880663       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-svx28\" at 100.66.64.88:80/TCP\nI0917 07:23:27.880673       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-ngqb8\" at 100.65.151.169:80/TCP\nI0917 07:23:27.880684       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-2sgkn\" at 100.64.231.171:80/TCP\nI0917 07:23:27.880699       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-f6m9h\" at 100.67.131.174:80/TCP\nI0917 07:23:27.880717       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-9dzg8\" at 
100.64.220.112:80/TCP\nI0917 07:23:27.880733       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-l2zd6\" at 100.64.12.115:80/TCP\nI0917 07:23:27.880753       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-z242z\" at 100.71.74.116:80/TCP\nI0917 07:23:27.880764       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-np9x6\" at 100.67.129.193:80/TCP\nI0917 07:23:27.880781       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-t78qm\" at 100.68.80.119:80/TCP\nI0917 07:23:27.880795       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-kchmk\" at 100.66.109.76:80/TCP\nI0917 07:23:27.880812       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-qzz4p\" at 100.71.8.106:80/TCP\nI0917 07:23:27.880823       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-xs2h7\" at 100.65.249.117:80/TCP\nI0917 07:23:27.880834       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-sssx4\" at 100.66.71.182:80/TCP\nI0917 07:23:27.880851       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-lqcm2\" at 100.66.177.255:80/TCP\nI0917 07:23:27.880865       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-94dmp\" at 100.65.3.69:80/TCP\nI0917 07:23:27.880880       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-hptqz\" at 100.70.130.12:80/TCP\nI0917 07:23:27.880893       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-ck249\" at 100.68.134.145:80/TCP\nI0917 07:23:27.880908       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-9m86s\" at 100.65.137.244:80/TCP\nI0917 07:23:27.880920       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-vc7pb\" at 100.66.100.200:80/TCP\nI0917 07:23:27.880936       1 service.go:416] Adding new service port 
\"svc-latency-9976/latency-svc-lpkxz\" at 100.66.175.147:80/TCP\nI0917 07:23:27.880953       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-m7prd\" at 100.68.219.183:80/TCP\nI0917 07:23:27.880966       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-tqcsg\" at 100.71.107.115:80/TCP\nI0917 07:23:27.880980       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-z6lcp\" at 100.67.59.41:80/TCP\nI0917 07:23:27.880990       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-58dft\" at 100.65.58.233:80/TCP\nI0917 07:23:27.881001       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-k2bcg\" at 100.65.215.155:80/TCP\nI0917 07:23:27.881017       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-z646q\" at 100.66.76.211:80/TCP\nI0917 07:23:27.881031       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-lw9dg\" at 100.68.103.108:80/TCP\nI0917 07:23:27.881048       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-rs2rd\" at 100.65.20.55:80/TCP\nI0917 07:23:27.881058       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-4zwqg\" at 100.65.60.207:80/TCP\nI0917 07:23:27.881068       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-t8kcr\" at 100.65.139.106:80/TCP\nI0917 07:23:27.881078       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-vzbss\" at 100.67.229.214:80/TCP\nI0917 07:23:27.881096       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-6sr9x\" at 100.66.208.49:80/TCP\nI0917 07:23:27.881110       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-6ccnl\" at 100.68.246.89:80/TCP\nI0917 07:23:27.881127       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-dnqxd\" at 100.66.83.104:80/TCP\nI0917 07:23:27.881141       1 service.go:416] Adding 
new service port \"svc-latency-9976/latency-svc-7b5nn\" at 100.65.118.48:80/TCP\nI0917 07:23:27.881151       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-8mw5v\" at 100.66.222.18:80/TCP\nI0917 07:23:27.881163       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-r8mnb\" at 100.64.69.128:80/TCP\nI0917 07:23:27.881185       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-rsps2\" at 100.68.78.80:80/TCP\nI0917 07:23:27.881203       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-nn869\" at 100.67.209.206:80/TCP\nI0917 07:23:27.881218       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-2tb5k\" at 100.71.11.166:80/TCP\nI0917 07:23:27.881228       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-87979\" at 100.70.130.154:80/TCP\nI0917 07:23:27.881238       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-r8m9z\" at 100.64.35.124:80/TCP\nI0917 07:23:27.881254       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-n79dz\" at 100.70.10.10:80/TCP\nI0917 07:23:27.881275       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-thrlx\" at 100.68.56.234:80/TCP\nI0917 07:23:27.881298       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-57m5w\" at 100.66.101.192:80/TCP\nI0917 07:23:27.881307       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-qmxtg\" at 100.65.121.192:80/TCP\nI0917 07:23:27.881318       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-cjztx\" at 100.70.225.173:80/TCP\nI0917 07:23:27.881334       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-wfkxg\" at 100.65.135.44:80/TCP\nI0917 07:23:27.881349       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-zc77s\" at 100.69.40.227:80/TCP\nI0917 07:23:27.881366       1 
service.go:416] Adding new service port \"svc-latency-9976/latency-svc-vn92t\" at 100.67.95.65:80/TCP\nI0917 07:23:27.881376       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-kbs8h\" at 100.65.220.92:80/TCP\nI0917 07:23:27.881386       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-lmft6\" at 100.70.57.240:80/TCP\nI0917 07:23:27.881396       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-ppvx4\" at 100.64.241.124:80/TCP\nI0917 07:23:27.881413       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-gqtcg\" at 100.67.29.207:80/TCP\nI0917 07:23:27.881427       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-hkp6l\" at 100.70.90.5:80/TCP\nI0917 07:23:27.881444       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-kwf2z\" at 100.70.155.250:80/TCP\nI0917 07:23:27.881878       1 proxier.go:845] \"Syncing iptables rules\"\nI0917 07:23:27.885877       1 service.go:301] Service svc-latency-9976/latency-svc-7c94p updated: 1 ports\nI0917 07:23:27.925468       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"44.875505ms\"\nI0917 07:23:27.940744       1 service.go:301] Service svc-latency-9976/latency-svc-fmbx4 updated: 1 ports\nI0917 07:23:27.981384       1 service.go:301] Service svc-latency-9976/latency-svc-2h6v2 updated: 1 ports\nI0917 07:23:28.032511       1 service.go:301] Service svc-latency-9976/latency-svc-trv2m updated: 1 ports\nI0917 07:23:28.083035       1 service.go:301] Service svc-latency-9976/latency-svc-7276h updated: 1 ports\nI0917 07:23:28.126145       1 service.go:301] Service svc-latency-9976/latency-svc-zltgc updated: 1 ports\nI0917 07:23:28.178018       1 service.go:301] Service svc-latency-9976/latency-svc-mkjhq updated: 1 ports\nI0917 07:23:28.226810       1 service.go:301] Service svc-latency-9976/latency-svc-4xh76 updated: 1 ports\nI0917 07:23:28.278006       1 service.go:301] Service 
svc-latency-9976/latency-svc-z96d7 updated: 1 ports\nI0917 07:23:28.321262       1 service.go:301] Service svc-latency-9976/latency-svc-bnjzn updated: 1 ports\nI0917 07:23:28.394424       1 service.go:301] Service svc-latency-9976/latency-svc-7rkds updated: 1 ports\nI0917 07:23:28.428251       1 service.go:301] Service svc-latency-9976/latency-svc-sh7tf updated: 1 ports\nI0917 07:23:28.497238       1 service.go:301] Service svc-latency-9976/latency-svc-2h2vw updated: 1 ports\nI0917 07:23:28.567888       1 service.go:301] Service svc-latency-9976/latency-svc-stql2 updated: 1 ports\nI0917 07:23:28.606757       1 service.go:301] Service svc-latency-9976/latency-svc-jfj24 updated: 1 ports\nI0917 07:23:28.655144       1 service.go:301] Service svc-latency-9976/latency-svc-gbt5v updated: 1 ports\nI0917 07:23:28.698320       1 service.go:301] Service svc-latency-9976/latency-svc-lp4q9 updated: 1 ports\nI0917 07:23:28.738529       1 service.go:301] Service svc-latency-9976/latency-svc-tmdv9 updated: 1 ports\nI0917 07:23:28.782625       1 service.go:301] Service svc-latency-9976/latency-svc-7fw72 updated: 1 ports\nI0917 07:23:28.826522       1 service.go:301] Service svc-latency-9976/latency-svc-42n97 updated: 1 ports\nI0917 07:23:28.877355       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-jfj24\" at 100.64.247.56:80/TCP\nI0917 07:23:28.877384       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-mkjhq\" at 100.69.86.87:80/TCP\nI0917 07:23:28.877396       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-zltgc\" at 100.69.6.50:80/TCP\nI0917 07:23:28.877408       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-bnjzn\" at 100.69.143.164:80/TCP\nI0917 07:23:28.877418       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-sh7tf\" at 100.68.246.241:80/TCP\nI0917 07:23:28.877428       1 service.go:416] Adding new service port 
\"svc-latency-9976/latency-svc-2h2vw\" at 100.66.193.177:80/TCP\nI0917 07:23:28.877438       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-tmdv9\" at 100.66.96.140:80/TCP\nI0917 07:23:28.877448       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-7fw72\" at 100.64.146.28:80/TCP\nI0917 07:23:28.877459       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-2h6v2\" at 100.71.163.245:80/TCP\nI0917 07:23:28.877471       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-z96d7\" at 100.68.161.143:80/TCP\nI0917 07:23:28.877483       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-7rkds\" at 100.68.209.42:80/TCP\nI0917 07:23:28.877493       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-stql2\" at 100.68.74.233:80/TCP\nI0917 07:23:28.877503       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-gbt5v\" at 100.65.232.248:80/TCP\nI0917 07:23:28.877513       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-7c94p\" at 100.65.54.197:80/TCP\nI0917 07:23:28.877524       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-trv2m\" at 100.65.35.21:80/TCP\nI0917 07:23:28.877534       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-7276h\" at 100.67.164.92:80/TCP\nI0917 07:23:28.877544       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-4xh76\" at 100.71.76.221:80/TCP\nI0917 07:23:28.877555       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-lp4q9\" at 100.65.194.85:80/TCP\nI0917 07:23:28.877565       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-42n97\" at 100.65.161.186:80/TCP\nI0917 07:23:28.877574       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-fmbx4\" at 100.67.55.202:80/TCP\nI0917 07:23:28.877914       1 proxier.go:845] \"Syncing 
iptables rules\"\nI0917 07:23:28.878510       1 service.go:301] Service svc-latency-9976/latency-svc-68qqz updated: 1 ports\nI0917 07:23:28.930860       1 service.go:301] Service svc-latency-9976/latency-svc-j4pgt updated: 1 ports\nI0917 07:23:28.946030       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"68.676841ms\"\nI0917 07:23:28.980762       1 service.go:301] Service svc-latency-9976/latency-svc-ndbmj updated: 1 ports\nI0917 07:23:29.027476       1 service.go:301] Service svc-latency-9976/latency-svc-qj4zf updated: 1 ports\nI0917 07:23:29.079444       1 service.go:301] Service svc-latency-9976/latency-svc-6smmm updated: 1 ports\nI0917 07:23:29.140651       1 service.go:301] Service svc-latency-9976/latency-svc-pfdw5 updated: 1 ports\nI0917 07:23:29.173854       1 service.go:301] Service svc-latency-9976/latency-svc-v8d9b updated: 1 ports\nI0917 07:23:29.225152       1 service.go:301] Service svc-latency-9976/latency-svc-75zqq updated: 1 ports\nI0917 07:23:29.280151       1 service.go:301] Service svc-latency-9976/latency-svc-g2w7h updated: 1 ports\nI0917 07:23:29.329141       1 service.go:301] Service svc-latency-9976/latency-svc-zbzdt updated: 1 ports\nI0917 07:23:29.376910       1 service.go:301] Service svc-latency-9976/latency-svc-4c59m updated: 1 ports\nI0917 07:23:29.432861       1 service.go:301] Service svc-latency-9976/latency-svc-z2fnd updated: 1 ports\nI0917 07:23:29.478162       1 service.go:301] Service svc-latency-9976/latency-svc-j57gx updated: 1 ports\nI0917 07:23:29.542593       1 service.go:301] Service svc-latency-9976/latency-svc-gmvhj updated: 1 ports\nI0917 07:23:29.577261       1 service.go:301] Service svc-latency-9976/latency-svc-zdcj4 updated: 1 ports\nI0917 07:23:29.642591       1 service.go:301] Service svc-latency-9976/latency-svc-5g99x updated: 1 ports\nI0917 07:23:29.681283       1 service.go:301] Service svc-latency-9976/latency-svc-cjjhl updated: 1 ports\nI0917 07:23:29.730746       1 service.go:301] Service 
svc-latency-9976/latency-svc-4qhcf updated: 1 ports\nI0917 07:23:29.788263       1 service.go:301] Service svc-latency-9976/latency-svc-2t7qg updated: 1 ports\nI0917 07:23:29.830541       1 service.go:301] Service svc-latency-9976/latency-svc-kd7bm updated: 1 ports\nI0917 07:23:29.890249       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-zbzdt\" at 100.69.240.5:80/TCP\nI0917 07:23:29.890277       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-4c59m\" at 100.71.191.201:80/TCP\nI0917 07:23:29.890309       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-gmvhj\" at 100.68.240.237:80/TCP\nI0917 07:23:29.890333       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-5g99x\" at 100.64.132.37:80/TCP\nI0917 07:23:29.890343       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-cjjhl\" at 100.69.27.191:80/TCP\nI0917 07:23:29.890354       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-75zqq\" at 100.67.43.72:80/TCP\nI0917 07:23:29.890379       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-j4pgt\" at 100.68.172.185:80/TCP\nI0917 07:23:29.890556       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-zdcj4\" at 100.69.100.174:80/TCP\nI0917 07:23:29.890573       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-kd7bm\" at 100.64.150.251:80/TCP\nI0917 07:23:29.890582       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-68qqz\" at 100.64.163.189:80/TCP\nI0917 07:23:29.890601       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-z2fnd\" at 100.66.4.254:80/TCP\nI0917 07:23:29.890629       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-j57gx\" at 100.67.156.233:80/TCP\nI0917 07:23:29.890640       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-2t7qg\" at 
100.70.147.146:80/TCP\nI0917 07:23:29.890653       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-v8d9b\" at 100.65.62.0:80/TCP\nI0917 07:23:29.890663       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-qj4zf\" at 100.64.190.22:80/TCP\nI0917 07:23:29.890671       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-6smmm\" at 100.65.168.82:80/TCP\nI0917 07:23:29.890702       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-pfdw5\" at 100.71.214.13:80/TCP\nI0917 07:23:29.890713       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-g2w7h\" at 100.69.167.187:80/TCP\nI0917 07:23:29.890723       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-4qhcf\" at 100.71.210.55:80/TCP\nI0917 07:23:29.890804       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-ndbmj\" at 100.68.191.29:80/TCP\nI0917 07:23:29.891129       1 proxier.go:845] \"Syncing iptables rules\"\nI0917 07:23:29.891270       1 service.go:301] Service svc-latency-9976/latency-svc-5mg9n updated: 1 ports\nI0917 07:23:29.934226       1 service.go:301] Service svc-latency-9976/latency-svc-w6n2n updated: 1 ports\nI0917 07:23:29.954946       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"64.72804ms\"\nI0917 07:23:29.984412       1 service.go:301] Service svc-latency-9976/latency-svc-trqrz updated: 1 ports\nI0917 07:23:30.027679       1 service.go:301] Service svc-latency-9976/latency-svc-289vc updated: 1 ports\nI0917 07:23:30.083059       1 service.go:301] Service svc-latency-9976/latency-svc-qq8rc updated: 1 ports\nI0917 07:23:30.121843       1 service.go:301] Service svc-latency-9976/latency-svc-frjvf updated: 1 ports\nI0917 07:23:30.178078       1 service.go:301] Service svc-latency-9976/latency-svc-5jzzz updated: 1 ports\nI0917 07:23:30.226905       1 service.go:301] Service svc-latency-9976/latency-svc-6xshx updated: 1 ports\nI0917 
07:23:30.280511       1 service.go:301] Service svc-latency-9976/latency-svc-lm2h6 updated: 1 ports\nI0917 07:23:30.327287       1 service.go:301] Service svc-latency-9976/latency-svc-7gxhs updated: 1 ports\nI0917 07:23:30.384925       1 service.go:301] Service svc-latency-9976/latency-svc-25p2z updated: 1 ports\nI0917 07:23:30.428211       1 service.go:301] Service svc-latency-9976/latency-svc-tvkfd updated: 1 ports\nI0917 07:23:30.479031       1 service.go:301] Service svc-latency-9976/latency-svc-bxg5r updated: 1 ports\nI0917 07:23:30.527016       1 service.go:301] Service svc-latency-9976/latency-svc-rxnmg updated: 1 ports\nI0917 07:23:30.578349       1 service.go:301] Service svc-latency-9976/latency-svc-xr7hp updated: 1 ports\nI0917 07:23:30.627353       1 service.go:301] Service svc-latency-9976/latency-svc-hzfp4 updated: 1 ports\nI0917 07:23:30.677711       1 service.go:301] Service svc-latency-9976/latency-svc-5nvnd updated: 1 ports\nI0917 07:23:30.726949       1 service.go:301] Service svc-latency-9976/latency-svc-ltczx updated: 1 ports\nI0917 07:23:30.774065       1 service.go:301] Service svc-latency-9976/latency-svc-xqjv5 updated: 1 ports\nI0917 07:23:30.876161       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-tvkfd\" at 100.65.94.253:80/TCP\nI0917 07:23:30.876197       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-xr7hp\" at 100.70.198.47:80/TCP\nI0917 07:23:30.876209       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-5nvnd\" at 100.66.227.74:80/TCP\nI0917 07:23:30.876235       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-ltczx\" at 100.69.149.199:80/TCP\nI0917 07:23:30.876249       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-289vc\" at 100.64.110.45:80/TCP\nI0917 07:23:30.876264       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-qq8rc\" at 100.70.194.255:80/TCP\nI0917 07:23:30.876284  
     1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-5jzzz\" at 100.70.182.233:80/TCP\nI0917 07:23:30.876301       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-lm2h6\" at 100.69.118.59:80/TCP\nI0917 07:23:30.876316       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-5mg9n\" at 100.69.117.77:80/TCP\nI0917 07:23:30.876329       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-w6n2n\" at 100.67.189.226:80/TCP\nI0917 07:23:30.876343       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-trqrz\" at 100.67.79.9:80/TCP\nI0917 07:23:30.876357       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-hzfp4\" at 100.66.170.13:80/TCP\nI0917 07:23:30.876371       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-6xshx\" at 100.66.99.29:80/TCP\nI0917 07:23:30.876384       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-7gxhs\" at 100.64.115.21:80/TCP\nI0917 07:23:30.876397       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-25p2z\" at 100.65.45.181:80/TCP\nI0917 07:23:30.876409       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-bxg5r\" at 100.64.210.56:80/TCP\nI0917 07:23:30.876425       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-frjvf\" at 100.65.119.240:80/TCP\nI0917 07:23:30.876438       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-rxnmg\" at 100.71.28.29:80/TCP\nI0917 07:23:30.876456       1 service.go:416] Adding new service port \"svc-latency-9976/latency-svc-xqjv5\" at 100.70.85.19:80/TCP\nI0917 07:23:30.876813       1 proxier.go:845] \"Syncing iptables rules\"\nI0917 07:23:30.882553       1 service.go:301] Service svc-latency-9976/latency-svc-w5q8c updated: 1 ports\nI0917 07:23:30.928194       1 service.go:301] Service webhook-3199/e2e-test-webhook updated: 1 
ports
I0917 07:23:30.942639       1 service.go:301] Service svc-latency-9976/latency-svc-m8q9l updated: 1 ports
I0917 07:23:30.946826       1 proxier.go:812] "SyncProxyRules complete" elapsed="70.664474ms"
I0917 07:23:30.978618       1 service.go:301] Service svc-latency-9976/latency-svc-9bh2h updated: 1 ports
I0917 07:23:31.028957       1 service.go:301] Service svc-latency-9976/latency-svc-9cqhf updated: 1 ports
I0917 07:23:31.077419       1 service.go:301] Service svc-latency-9976/latency-svc-2h6n7 updated: 1 ports
I0917 07:23:31.124872       1 service.go:301] Service svc-latency-9976/latency-svc-sr4rs updated: 1 ports
I0917 07:23:31.177637       1 service.go:301] Service svc-latency-9976/latency-svc-98n5q updated: 1 ports
I0917 07:23:31.243071       1 service.go:301] Service svc-latency-9976/latency-svc-xx95w updated: 1 ports
I0917 07:23:31.281434       1 service.go:301] Service svc-latency-9976/latency-svc-xn2ss updated: 1 ports
I0917 07:23:31.330730       1 service.go:301] Service svc-latency-9976/latency-svc-qntf7 updated: 1 ports
I0917 07:23:31.375949       1 service.go:301] Service svc-latency-9976/latency-svc-h7rls updated: 1 ports
I0917 07:23:31.426837       1 service.go:301] Service svc-latency-9976/latency-svc-jhl9t updated: 1 ports
I0917 07:23:31.480577       1 service.go:301] Service svc-latency-9976/latency-svc-tp6wx updated: 1 ports
I0917 07:23:31.522766       1 service.go:301] Service svc-latency-9976/latency-svc-fbqcz updated: 1 ports
I0917 07:23:31.653900       1 service.go:301] Service svc-latency-9976/latency-svc-7sl2r updated: 1 ports
I0917 07:23:31.661030       1 service.go:301] Service svc-latency-9976/latency-svc-rfpnf updated: 1 ports
I0917 07:23:31.737257       1 service.go:301] Service svc-latency-9976/latency-svc-g8rvc updated: 1 ports
I0917 07:23:31.777621       1 service.go:301] Service svc-latency-9976/latency-svc-ntbz7 updated: 1 ports
I0917 07:23:31.830618       1 service.go:301] Service svc-latency-9976/latency-svc-59zz9 updated: 1 ports
I0917 07:23:31.871795       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-98n5q" at 100.71.36.181:80/TCP
I0917 07:23:31.871823       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-fbqcz" at 100.66.0.75:80/TCP
I0917 07:23:31.871834       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-g8rvc" at 100.66.40.206:80/TCP
I0917 07:23:31.871843       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-w5q8c" at 100.67.34.224:80/TCP
I0917 07:23:31.871853       1 service.go:416] Adding new service port "webhook-3199/e2e-test-webhook" at 100.65.88.9:8443/TCP
I0917 07:23:31.871863       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-m8q9l" at 100.71.163.199:80/TCP
I0917 07:23:31.871875       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-9cqhf" at 100.68.255.254:80/TCP
I0917 07:23:31.871886       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-2h6n7" at 100.70.156.45:80/TCP
I0917 07:23:31.871902       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-xx95w" at 100.66.204.162:80/TCP
I0917 07:23:31.871912       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-xn2ss" at 100.69.122.144:80/TCP
I0917 07:23:31.871923       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-rfpnf" at 100.69.249.58:80/TCP
I0917 07:23:31.871942       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-7sl2r" at 100.70.17.102:80/TCP
I0917 07:23:31.871956       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-ntbz7" at 100.65.124.39:80/TCP
I0917 07:23:31.871974       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-9bh2h" at 100.66.125.131:80/TCP
I0917 07:23:31.871984       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-sr4rs" at 100.67.205.246:80/TCP
I0917 07:23:31.871994       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-qntf7" at 100.64.114.14:80/TCP
I0917 07:23:31.872006       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-h7rls" at 100.70.205.161:80/TCP
I0917 07:23:31.872022       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-jhl9t" at 100.68.86.151:80/TCP
I0917 07:23:31.872037       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-tp6wx" at 100.67.66.248:80/TCP
I0917 07:23:31.872053       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-59zz9" at 100.65.126.183:80/TCP
I0917 07:23:31.872453       1 proxier.go:845] "Syncing iptables rules"
I0917 07:23:31.880288       1 service.go:301] Service svc-latency-9976/latency-svc-75q6g updated: 1 ports
I0917 07:23:31.936558       1 service.go:301] Service svc-latency-9976/latency-svc-v5g79 updated: 1 ports
I0917 07:23:31.975799       1 proxier.go:812] "SyncProxyRules complete" elapsed="104.000377ms"
I0917 07:23:32.031590       1 service.go:301] Service svc-latency-9976/latency-svc-w4sdl updated: 1 ports
I0917 07:23:32.075892       1 service.go:301] Service svc-latency-9976/latency-svc-z8zg8 updated: 1 ports
I0917 07:23:32.161079       1 service.go:301] Service svc-latency-9976/latency-svc-g8dmn updated: 1 ports
I0917 07:23:32.197728       1 service.go:301] Service svc-latency-9976/latency-svc-8wlxb updated: 1 ports
I0917 07:23:32.233200       1 service.go:301] Service svc-latency-9976/latency-svc-kptlp updated: 1 ports
I0917 07:23:32.287392       1 service.go:301] Service svc-latency-9976/latency-svc-vt7cc updated: 1 ports
I0917 07:23:32.334209       1 service.go:301] Service svc-latency-9976/latency-svc-j2hbk updated: 1 ports
I0917 07:23:32.376556       1 service.go:301] Service svc-latency-9976/latency-svc-fhg5k updated: 1 ports
I0917 07:23:32.430015       1 service.go:301] Service svc-latency-9976/latency-svc-b7lz2 updated: 1 ports
I0917 07:23:32.483336       1 service.go:301] Service svc-latency-9976/latency-svc-brs9c updated: 1 ports
I0917 07:23:32.526550       1 service.go:301] Service svc-latency-9976/latency-svc-dmgl8 updated: 1 ports
I0917 07:23:32.577810       1 service.go:301] Service svc-latency-9976/latency-svc-hlgbl updated: 1 ports
I0917 07:23:32.640999       1 service.go:301] Service svc-latency-9976/latency-svc-7pffm updated: 1 ports
I0917 07:23:32.686746       1 service.go:301] Service svc-latency-9976/latency-svc-n59fg updated: 1 ports
I0917 07:23:32.731617       1 service.go:301] Service svc-latency-9976/latency-svc-v7g9z updated: 1 ports
I0917 07:23:32.784356       1 service.go:301] Service svc-latency-9976/latency-svc-c5nc5 updated: 1 ports
I0917 07:23:32.824291       1 service.go:301] Service svc-latency-9976/latency-svc-g7nxg updated: 1 ports
I0917 07:23:32.874385       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-fhg5k" at 100.69.30.240:80/TCP
I0917 07:23:32.874413       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-brs9c" at 100.70.110.65:80/TCP
I0917 07:23:32.874424       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-v7g9z" at 100.69.11.21:80/TCP
I0917 07:23:32.874435       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-g7nxg" at 100.71.8.36:80/TCP
I0917 07:23:32.874445       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-v5g79" at 100.70.140.48:80/TCP
I0917 07:23:32.874582       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-j2hbk" at 100.66.41.255:80/TCP
I0917 07:23:32.874599       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-75q6g" at 100.68.153.254:80/TCP
I0917 07:23:32.874667       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-w4sdl" at 100.68.160.189:80/TCP
I0917 07:23:32.874684       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-z8zg8" at 100.64.10.190:80/TCP
I0917 07:23:32.874740       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-g8dmn" at 100.64.192.191:80/TCP
I0917 07:23:32.874758       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-8wlxb" at 100.71.193.8:80/TCP
I0917 07:23:32.874808       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-kptlp" at 100.70.122.206:80/TCP
I0917 07:23:32.874820       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-vt7cc" at 100.68.210.164:80/TCP
I0917 07:23:32.874833       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-b7lz2" at 100.70.90.41:80/TCP
I0917 07:23:32.874851       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-7pffm" at 100.67.140.212:80/TCP
I0917 07:23:32.874861       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-n59fg" at 100.64.92.222:80/TCP
I0917 07:23:32.874872       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-dmgl8" at 100.68.29.72:80/TCP
I0917 07:23:32.874881       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-hlgbl" at 100.67.5.23:80/TCP
I0917 07:23:32.874891       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-c5nc5" at 100.65.42.80:80/TCP
I0917 07:23:32.875145       1 proxier.go:845] "Syncing iptables rules"
I0917 07:23:32.885453       1 service.go:301] Service svc-latency-9976/latency-svc-9mj78 updated: 1 ports
I0917 07:23:32.937244       1 service.go:301] Service svc-latency-9976/latency-svc-rbqcf updated: 1 ports
I0917 07:23:32.952697       1 proxier.go:812] "SyncProxyRules complete" elapsed="78.310881ms"
I0917 07:23:32.981492       1 service.go:301] Service svc-latency-9976/latency-svc-78c8q updated: 1 ports
I0917 07:23:33.031414       1 service.go:301] Service svc-latency-9976/latency-svc-w44xj updated: 1 ports
I0917 07:23:33.079684       1 service.go:301] Service svc-latency-9976/latency-svc-cxq2c updated: 1 ports
I0917 07:23:33.129268       1 service.go:301] Service svc-latency-9976/latency-svc-l5qj6 updated: 1 ports
I0917 07:23:33.174806       1 service.go:301] Service svc-latency-9976/latency-svc-j7m8c updated: 1 ports
I0917 07:23:33.229487       1 service.go:301] Service svc-latency-9976/latency-svc-4hxjw updated: 1 ports
I0917 07:23:33.276141       1 service.go:301] Service svc-latency-9976/latency-svc-x4z4d updated: 1 ports
I0917 07:23:33.331545       1 service.go:301] Service svc-latency-9976/latency-svc-gvb8z updated: 1 ports
I0917 07:23:33.378492       1 service.go:301] Service svc-latency-9976/latency-svc-clhzz updated: 1 ports
I0917 07:23:33.430477       1 service.go:301] Service svc-latency-9976/latency-svc-bnhqm updated: 1 ports
I0917 07:23:33.475656       1 service.go:301] Service svc-latency-9976/latency-svc-dvvcr updated: 1 ports
I0917 07:23:33.535076       1 service.go:301] Service svc-latency-9976/latency-svc-q95rk updated: 1 ports
I0917 07:23:33.577968       1 service.go:301] Service svc-latency-9976/latency-svc-9jn7r updated: 1 ports
I0917 07:23:33.630755       1 service.go:301] Service svc-latency-9976/latency-svc-tpb4d updated: 1 ports
I0917 07:23:33.674979       1 service.go:301] Service svc-latency-9976/latency-svc-hxmgd updated: 1 ports
I0917 07:23:33.725242       1 service.go:301] Service svc-latency-9976/latency-svc-6mvsh updated: 1 ports
I0917 07:23:33.744555       1 service.go:301] Service webhook-3199/e2e-test-webhook updated: 0 ports
I0917 07:23:33.779663       1 service.go:301] Service svc-latency-9976/latency-svc-62nld updated: 1 ports
I0917 07:23:33.828227       1 service.go:301] Service svc-latency-9976/latency-svc-ddvf6 updated: 1 ports
I0917 07:23:33.876702       1 service.go:301] Service svc-latency-9976/latency-svc-ndwmx updated: 1 ports
I0917 07:23:33.876755       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-ddvf6" at 100.66.97.78:80/TCP
I0917 07:23:33.876772       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-cxq2c" at 100.71.244.255:80/TCP
I0917 07:23:33.876787       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-l5qj6" at 100.66.93.26:80/TCP
I0917 07:23:33.876799       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-j7m8c" at 100.66.176.1:80/TCP
I0917 07:23:33.876809       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-tpb4d" at 100.68.219.241:80/TCP
I0917 07:23:33.876819       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-6mvsh" at 100.65.183.236:80/TCP
I0917 07:23:33.876829       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-rbqcf" at 100.67.101.97:80/TCP
I0917 07:23:33.876839       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-gvb8z" at 100.70.43.148:80/TCP
I0917 07:23:33.876846       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-dvvcr" at 100.66.208.107:80/TCP
I0917 07:23:33.876853       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-q95rk" at 100.68.164.100:80/TCP
I0917 07:23:33.876863       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-clhzz" at 100.70.252.114:80/TCP
I0917 07:23:33.876877       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-hxmgd" at 100.68.34.225:80/TCP
I0917 07:23:33.876886       1 service.go:441] Removing service port "webhook-3199/e2e-test-webhook"
I0917 07:23:33.876894       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-62nld" at 100.70.142.120:80/TCP
I0917 07:23:33.876903       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-9mj78" at 100.69.173.72:80/TCP
I0917 07:23:33.876910       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-78c8q" at 100.70.11.127:80/TCP
I0917 07:23:33.876916       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-w44xj" at 100.71.172.126:80/TCP
I0917 07:23:33.876922       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-x4z4d" at 100.66.38.94:80/TCP
I0917 07:23:33.876928       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-ndwmx" at 100.71.72.218:80/TCP
I0917 07:23:33.876935       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-4hxjw" at 100.69.153.192:80/TCP
I0917 07:23:33.876945       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-bnhqm" at 100.66.215.178:80/TCP
I0917 07:23:33.876954       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-9jn7r" at 100.65.179.238:80/TCP
I0917 07:23:33.877222       1 proxier.go:845] "Syncing iptables rules"
I0917 07:23:33.934416       1 service.go:301] Service svc-latency-9976/latency-svc-nkt8k updated: 1 ports
I0917 07:23:33.989072       1 proxier.go:812] "SyncProxyRules complete" elapsed="112.308674ms"
I0917 07:23:33.989112       1 service.go:301] Service svc-latency-9976/latency-svc-k729v updated: 1 ports
I0917 07:23:34.030021       1 service.go:301] Service svc-latency-9976/latency-svc-wngh7 updated: 1 ports
I0917 07:23:34.080302       1 service.go:301] Service svc-latency-9976/latency-svc-jh54j updated: 1 ports
I0917 07:23:34.126510       1 service.go:301] Service svc-latency-9976/latency-svc-jwx6q updated: 1 ports
I0917 07:23:34.180697       1 service.go:301] Service svc-latency-9976/latency-svc-wblwm updated: 1 ports
I0917 07:23:34.228215       1 service.go:301] Service svc-latency-9976/latency-svc-k2fgz updated: 1 ports
I0917 07:23:34.282262       1 service.go:301] Service svc-latency-9976/latency-svc-hdnv4 updated: 1 ports
I0917 07:23:34.333524       1 service.go:301] Service svc-latency-9976/latency-svc-tj9jr updated: 1 ports
I0917 07:23:34.374657       1 service.go:301] Service svc-latency-9976/latency-svc-qh62l updated: 1 ports
I0917 07:23:34.429300       1 service.go:301] Service svc-latency-9976/latency-svc-lr66z updated: 1 ports
I0917 07:23:34.487243       1 service.go:301] Service svc-latency-9976/latency-svc-f5pph updated: 1 ports
I0917 07:23:34.607485       1 service.go:301] Service svc-latency-9976/latency-svc-pv5m5 updated: 1 ports
I0917 07:23:34.626141       1 service.go:301] Service svc-latency-9976/latency-svc-2dw7k updated: 1 ports
I0917 07:23:34.677354       1 service.go:301] Service svc-latency-9976/latency-svc-xjp64 updated: 1 ports
I0917 07:23:34.725792       1 service.go:301] Service svc-latency-9976/latency-svc-v2l8t updated: 1 ports
I0917 07:23:34.781900       1 service.go:301] Service svc-latency-9976/latency-svc-hwbj7 updated: 1 ports
I0917 07:23:34.831271       1 service.go:301] Service svc-latency-9976/latency-svc-5wfj9 updated: 1 ports
I0917 07:23:34.875149       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-jh54j" at 100.71.217.164:80/TCP
I0917 07:23:34.875181       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-jwx6q" at 100.68.120.203:80/TCP
I0917 07:23:34.875192       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-f5pph" at 100.65.98.165:80/TCP
I0917 07:23:34.875201       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-pv5m5" at 100.70.11.44:80/TCP
I0917 07:23:34.875211       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-v2l8t" at 100.66.174.101:80/TCP
I0917 07:23:34.875223       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-nkt8k" at 100.69.215.199:80/TCP
I0917 07:23:34.875236       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-k729v" at 100.65.140.241:80/TCP
I0917 07:23:34.875246       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-k2fgz" at 100.65.206.217:80/TCP
I0917 07:23:34.875257       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-lr66z" at 100.71.118.228:80/TCP
I0917 07:23:34.875271       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-wngh7" at 100.66.124.9:80/TCP
I0917 07:23:34.875284       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-hdnv4" at 100.66.68.88:80/TCP
I0917 07:23:34.875300       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-qh62l" at 100.69.218.7:80/TCP
I0917 07:23:34.875310       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-wblwm" at 100.67.139.219:80/TCP
I0917 07:23:34.875320       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-tj9jr" at 100.70.29.223:80/TCP
I0917 07:23:34.875333       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-2dw7k" at 100.68.127.190:80/TCP
I0917 07:23:34.875347       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-xjp64" at 100.66.146.67:80/TCP
I0917 07:23:34.875363       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-hwbj7" at 100.64.122.82:80/TCP
I0917 07:23:34.875378       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-5wfj9" at 100.64.213.198:80/TCP
I0917 07:23:34.875737       1 proxier.go:845] "Syncing iptables rules"
I0917 07:23:34.881670       1 service.go:301] Service svc-latency-9976/latency-svc-68vrt updated: 1 ports
I0917 07:23:34.942065       1 service.go:301] Service svc-latency-9976/latency-svc-nqrtn updated: 1 ports
I0917 07:23:34.949747       1 proxier.go:812] "SyncProxyRules complete" elapsed="74.601448ms"
I0917 07:23:34.982588       1 service.go:301] Service svc-latency-9976/latency-svc-wbnss updated: 1 ports
I0917 07:23:35.042665       1 service.go:301] Service svc-latency-9976/latency-svc-7zlwg updated: 1 ports
I0917 07:23:35.949948       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-nqrtn" at 100.70.249.69:80/TCP
I0917 07:23:35.949977       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-wbnss" at 100.71.157.124:80/TCP
I0917 07:23:35.949989       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-7zlwg" at 100.68.166.4:80/TCP
I0917 07:23:35.950001       1 service.go:416] Adding new service port "svc-latency-9976/latency-svc-68vrt" at 100.69.233.2:80/TCP
I0917 07:23:35.950341       1 proxier.go:845] "Syncing iptables rules"
I0917 07:23:36.063543       1 proxier.go:812] "SyncProxyRules complete" elapsed="113.631222ms"
I0917 07:23:40.896958       1 service.go:301] Service svc-latency-9976/latency-svc-25p2z updated: 0 ports
I0917 07:23:40.897055       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-25p2z"
I0917 07:23:40.897216       1 proxier.go:845] "Syncing iptables rules"
I0917 07:23:40.929951       1 service.go:301] Service svc-latency-9976/latency-svc-289vc updated: 0 ports
I0917 07:23:40.955877       1 service.go:301] Service svc-latency-9976/latency-svc-2dw7k updated: 0 ports
I0917 07:23:40.973533       1 service.go:301] Service svc-latency-9976/latency-svc-2h2vw updated: 0 ports
I0917 07:23:41.000663       1 service.go:301] Service svc-latency-9976/latency-svc-2h6n7 updated: 0 ports
I0917 07:23:41.017544       1 service.go:301] Service svc-latency-9976/latency-svc-2h6v2 updated: 0 ports
I0917 07:23:41.036654       1 service.go:301] Service svc-latency-9976/latency-svc-2sgkn updated: 0 ports
I0917 07:23:41.055777       1 proxier.go:812] "SyncProxyRules complete" elapsed="158.679363ms"
I0917 07:23:41.055821       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-2h6v2"
I0917 07:23:41.055853       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-2sgkn"
I0917 07:23:41.055862       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-289vc"
I0917 07:23:41.055871       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-2dw7k"
I0917 07:23:41.055880       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-2h2vw"
I0917 07:23:41.055888       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-2h6n7"
I0917 07:23:41.056639       1 proxier.go:845] "Syncing iptables rules"
I0917 07:23:41.058863       1 service.go:301] Service svc-latency-9976/latency-svc-2t7qg updated: 0 ports
I0917 07:23:41.079721       1 service.go:301] Service svc-latency-9976/latency-svc-2tb5k updated: 0 ports
I0917 07:23:41.102180       1 service.go:301] Service svc-latency-9976/latency-svc-42n97 updated: 0 ports
I0917 07:23:41.133685       1 service.go:301] Service svc-latency-9976/latency-svc-4c59m updated: 0 ports
I0917 07:23:41.170816       1 service.go:301] Service svc-latency-9976/latency-svc-4hxjw updated: 0 ports
I0917 07:23:41.172386       1 proxier.go:812] "SyncProxyRules complete" elapsed="116.5609ms"
I0917 07:23:41.183029       1 service.go:301] Service svc-latency-9976/latency-svc-4qhcf updated: 0 ports
I0917 07:23:41.195104       1 service.go:301] Service svc-latency-9976/latency-svc-4xh76 updated: 0 ports
I0917 07:23:41.222371       1 service.go:301] Service svc-latency-9976/latency-svc-4zwqg updated: 0 ports
I0917 07:23:41.236590       1 service.go:301] Service svc-latency-9976/latency-svc-57m5w updated: 0 ports
I0917 07:23:41.251192       1 service.go:301] Service svc-latency-9976/latency-svc-58dft updated: 0 ports
I0917 07:23:41.281708       1 service.go:301] Service svc-latency-9976/latency-svc-59zz9 updated: 0 ports
I0917 07:23:41.319793       1 service.go:301] Service svc-latency-9976/latency-svc-5g99x updated: 0 ports
I0917 07:23:41.342205       1 service.go:301] Service svc-latency-9976/latency-svc-5jzzz updated: 0 ports
I0917 07:23:41.371799       1 service.go:301] Service svc-latency-9976/latency-svc-5mg9n updated: 0 ports
I0917 07:23:41.381130       1 service.go:301] Service svc-latency-9976/latency-svc-5nvnd updated: 0 ports
I0917 07:23:41.393805       1 service.go:301] Service svc-latency-9976/latency-svc-5wfj9 updated: 0 ports
I0917 07:23:41.404889       1 service.go:301] Service svc-latency-9976/latency-svc-62nld updated: 0 ports
I0917 07:23:41.418857       1 service.go:301] Service svc-latency-9976/latency-svc-68qqz updated: 0 ports
I0917 07:23:41.427478       1 service.go:301] Service svc-latency-9976/latency-svc-68vrt updated: 0 ports
I0917 07:23:41.435591       1 service.go:301] Service svc-latency-9976/latency-svc-6ccnl updated: 0 ports
I0917 07:23:41.446543       1 service.go:301] Service svc-latency-9976/latency-svc-6mvsh updated: 0 ports
I0917 07:23:41.456327       1 service.go:301] Service svc-latency-9976/latency-svc-6smmm updated: 0 ports
I0917 07:23:41.467946       1 service.go:301] Service svc-latency-9976/latency-svc-6sr9x updated: 0 ports
I0917 07:23:41.484531       1 service.go:301] Service svc-latency-9976/latency-svc-6xshx updated: 0 ports
I0917 07:23:41.494353       1 service.go:301] Service svc-latency-9976/latency-svc-7276h updated: 0 ports
I0917 07:23:41.505907       1 service.go:301] Service svc-latency-9976/latency-svc-75q6g updated: 0 ports
I0917 07:23:41.518375       1 service.go:301] Service svc-latency-9976/latency-svc-75zqq updated: 0 ports
I0917 07:23:41.530760       1 service.go:301] Service svc-latency-9976/latency-svc-78c8q updated: 0 ports
I0917 07:23:41.537988       1 service.go:301] Service svc-latency-9976/latency-svc-7b2cm updated: 0 ports
I0917 07:23:41.545295       1 service.go:301] Service svc-latency-9976/latency-svc-7b5nn updated: 0 ports
I0917 07:23:41.555481       1 service.go:301] Service svc-latency-9976/latency-svc-7c94p updated: 0 ports
I0917 07:23:41.563190       1 service.go:301] Service svc-latency-9976/latency-svc-7fw72 updated: 0 ports
I0917 07:23:41.571215       1 service.go:301] Service svc-latency-9976/latency-svc-7gxhs updated: 0 ports
I0917 07:23:41.590133       1 service.go:301] Service svc-latency-9976/latency-svc-7pffm updated: 0 ports
I0917 07:23:41.628817       1 service.go:301] Service svc-latency-9976/latency-svc-7rkds updated: 0 ports
I0917 07:23:41.663537       1 service.go:301] Service svc-latency-9976/latency-svc-7sl2r updated: 0 ports
I0917 07:23:41.703251       1 service.go:301] Service svc-latency-9976/latency-svc-7zlwg updated: 0 ports
I0917 07:23:41.736632       1 service.go:301] Service svc-latency-9976/latency-svc-87979 updated: 0 ports
I0917 07:23:41.765847       1 service.go:301] Service svc-latency-9976/latency-svc-8mw5v updated: 0 ports
I0917 07:23:41.780728       1 service.go:301] Service svc-latency-9976/latency-svc-8wlxb updated: 0 ports
I0917 07:23:41.789234       1 service.go:301] Service svc-latency-9976/latency-svc-94dmp updated: 0 ports
I0917 07:23:41.802261       1 service.go:301] Service svc-latency-9976/latency-svc-98n5q updated: 0 ports
I0917 07:23:41.810230       1 service.go:301] Service svc-latency-9976/latency-svc-9bh2h updated: 0 ports
I0917 07:23:41.825160       1 service.go:301] Service svc-latency-9976/latency-svc-9cqhf updated: 0 ports
I0917 07:23:41.833017       1 service.go:301] Service svc-latency-9976/latency-svc-9dzg8 updated: 0 ports
I0917 07:23:41.845830       1 service.go:301] Service svc-latency-9976/latency-svc-9jn7r updated: 0 ports
I0917 07:23:41.855881       1 service.go:301] Service svc-latency-9976/latency-svc-9m86s updated: 0 ports
I0917 07:23:41.863798       1 service.go:301] Service svc-latency-9976/latency-svc-9mj78 updated: 0 ports
I0917 07:23:41.871528       1 service.go:301] Service svc-latency-9976/latency-svc-b7lz2 updated: 0 ports
I0917 07:23:41.879520       1 service.go:301] Service svc-latency-9976/latency-svc-bb78l updated: 0 ports
I0917 07:23:41.888926       1 service.go:301] Service svc-latency-9976/latency-svc-bnhqm updated: 0 ports
I0917 07:23:41.912659       1 service.go:301] Service svc-latency-9976/latency-svc-bnjzn updated: 0 ports
I0917 07:23:41.912723       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-5mg9n"
I0917 07:23:41.912772       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-6mvsh"
I0917 07:23:41.912792       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-7b5nn"
I0917 07:23:41.912799       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-7rkds"
I0917 07:23:41.912806       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-9bh2h"
I0917 07:23:41.912841       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-98n5q"
I0917 07:23:41.912852       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-4xh76"
I0917 07:23:41.912859       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-4zwqg"
I0917 07:23:41.912866       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-62nld"
I0917 07:23:41.912875       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-6sr9x"
I0917 07:23:41.912882       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-78c8q"
I0917 07:23:41.912919       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-7fw72"
I0917 07:23:41.912931       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-94dmp"
I0917 07:23:41.912940       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-9cqhf"
I0917 07:23:41.912947       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-42n97"
I0917 07:23:41.912956       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-4hxjw"
I0917 07:23:41.912994       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-5wfj9"
I0917 07:23:41.913004       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-6smmm"
I0917 07:23:41.913011       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-7276h"
I0917 07:23:41.913020       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-75zqq"
I0917 07:23:41.913037       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-7gxhs"
I0917 07:23:41.913072       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-5jzzz"
I0917 07:23:41.913082       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-5nvnd"
I0917 07:23:41.913090       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-6ccnl"
I0917 07:23:41.913098       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-7c94p"
I0917 07:23:41.913116       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-8mw5v"
I0917 07:23:41.913126       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-8wlxb"
I0917 07:23:41.913158       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-9mj78"
I0917 07:23:41.913169       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-9dzg8"
I0917 07:23:41.913188       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-2t7qg"
I0917 07:23:41.913198       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-2tb5k"
I0917 07:23:41.913207       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-4c59m"
I0917 07:23:41.913214       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-58dft"
I0917 07:23:41.913256       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-6xshx"
I0917 07:23:41.913269       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-7pffm"
I0917 07:23:41.913277       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-7sl2r"
I0917 07:23:41.913285       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-b7lz2"
I0917 07:23:41.913297       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-5g99x"
I0917 07:23:41.913331       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-68vrt"
I0917 07:23:41.913343       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-7zlwg"
I0917 07:23:41.913352       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-bnjzn"
I0917 07:23:41.913359       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-4qhcf"
I0917 07:23:41.913380       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-87979"
I0917 07:23:41.913411       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-9jn7r"
I0917 07:23:41.913432       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-9m86s"
I0917 07:23:41.913443       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-57m5w"
I0917 07:23:41.913451       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-59zz9"
I0917 07:23:41.913459       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-68qqz"
I0917 07:23:41.913467       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-75q6g"
I0917 07:23:41.913500       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-7b2cm"
I0917 07:23:41.913512       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-bb78l"
I0917 07:23:41.913522       1 service.go:441] Removing service port "svc-latency-9976/latency-svc-bnhqm"
I0917 07:23:41.913916       1 proxier.go:845] "Syncing iptables rules"
I0917 07:23:41.933944       1 service.go:301] Service svc-latency-9976/latency-svc-brs9c updated: 0 ports
I0917 07:23:41.950139       1 service.go:301] Service svc-latency-9976/latency-svc-bxg5r updated: 0 ports
I0917 07:23:41.967836       1 service.go:301] Service svc-latency-9976/latency-svc-c5nc5 updated: 0 ports
I0917 07:23:41.979745       1 service.go:301] Service svc-latency-9976/latency-svc-cjjhl updated: 0 ports
I0917 07:23:41.987736       1 proxier.go:812] "SyncProxyRules complete" elapsed="75.000508ms"
I0917 07:23:41.992188       1 service.go:301] Service svc-latency-9976/latency-svc-cjztx updated: 0 ports
I0917 07:23:42.017449       1 service.go:301] Service svc-latency-9976/latency-svc-ck249 updated: 0 ports
I0917 07:23:42.025010       1 service.go:301] Service svc-latency-9976/latency-svc-clhzz updated: 0 ports
I0917 07:23:42.032664       1 service.go:301] Service svc-latency-9976/latency-svc-cxq2c updated: 0 ports
I0917 07:23:42.042914       1 service.go:301] Service svc-latency-9976/latency-svc-ddvf6 updated: 0 ports
I0917 07:23:42.053269       1 service.go:301] Service svc-latency-9976/latency-svc-dmgl8 updated: 0 ports
I0917 07:23:42.063556       1 service.go:301] Service svc-latency-9976/latency-svc-dnqxd updated: 0 ports
I0917 07:23:42.071138       1 service.go:301] Service svc-latency-9976/latency-svc-dvvcr updated: 0 ports
I0917 07:23:42.085389       1 service.go:301] Service svc-latency-9976/latency-svc-f5pph updated: 0 ports
I0917 07:23:42.098555       1 service.go:301] Service svc-latency-9976/latency-svc-f6m9h updated: 0 ports
I0917 07:23:42.109028       1 service.go:301] Service svc-latency-9976/latency-svc-fbqcz updated: 0 ports
I0917 07:23:42.117494       1 service.go:301] Service svc-latency-9976/latency-svc-fhg5k updated: 0 ports
I0917 07:23:42.135663       1 service.go:301] Service svc-latency-9976/latency-svc-fmbx4 updated: 0 ports
I0917 07:23:42.146007       1 service.go:301] Service svc-latency-9976/latency-svc-frjvf updated: 0 ports
I0917 07:23:42.158816       1 service.go:301] Service svc-latency-9976/latency-svc-g2w7h updated: 0 ports
I0917 07:23:42.167131       1 service.go:301] Service svc-latency-9976/latency-svc-g7nxg updated: 0 ports
I0917 07:23:42.174420       1 service.go:301] Service svc-latency-9976/latency-svc-g8dmn updated: 0 ports
I0917 07:23:42.182407       1 service.go:301] Service svc-latency-9976/latency-svc-g8rvc updated: 0 ports
I0917 07:23:42.195778       1 service.go:301] Service svc-latency-9976/latency-svc-gbt5v updated: 0 ports
I0917 07:23:42.204077       1 service.go:301] Service svc-latency-9976/latency-svc-gmvhj updated: 0 ports
I0917 07:23:42.214128       1 service.go:301] Service svc-latency-9976/latency-svc-gqtcg updated: 0 ports
I0917 07:23:42.221207       1 service.go:301] Service svc-latency-9976/latency-svc-gvb8z updated: 0 ports
I0917 07:23:42.237670       1 service.go:301] Service svc-latency-9976/latency-svc-h7rls updated: 0 ports
I0917 07:23:42.246112       1 service.go:301] Service svc-latency-9976/latency-svc-hdnv4 updated: 0 ports
I0917 07:23:42.255569       1 service.go:301] Service svc-latency-9976/latency-svc-hkp6l updated: 0 ports
I0917 07:23:42.263677       1 service.go:301] Service svc-latency-9976/latency-svc-hlgbl updated: 0 ports
I0917 07:23:42.269991       1 service.go:301] Service svc-latency-9976/latency-svc-hptqz updated: 0 ports
I0917 07:23:42.278985       1 service.go:301] Service svc-latency-9976/latency-svc-hwbj7 updated: 0 ports
I0917 07:23:42.287477       1 service.go:301] Service svc-latency-9976/latency-svc-hxmgd updated: 0 ports
I0917 07:23:42.294727       1 service.go:301] Service svc-latency-9976/latency-svc-hzfp4 updated: 0 ports
I0917 07:23:42.303146       1 service.go:301] Service svc-latency-9976/latency-svc-j2hbk updated: 0 ports
I0917 07:23:42.311701       1 service.go:301] Service svc-latency-9976/latency-svc-j4pgt updated: 0 ports
I0917 07:23:42.318430       1 service.go:301] Service svc-latency-9976/latency-svc-j57gx updated: 0 ports
I0917 07:23:42.325576       1 service.go:301] Service svc-latency-9976/latency-svc-j7m8c updated: 0 ports
I0917 07:23:42.340846       1 service.go:301] Service svc-latency-9976/latency-svc-jfj24 updated: 0 ports
I0917 07:23:42.356703       1 service.go:301] Service svc-latency-9976/latency-svc-jh54j updated: 0 ports
I0917 07:23:42.375853       1 service.go:301] Service svc-latency-9976/latency-svc-jhl9t updated: 0 ports
I0917 07:23:42.396019       1 service.go:301] Service svc-latency-9976/latency-svc-jwx6q updated: 0 ports
I0917 07:23:42.411383       1 service.go:301] Service svc-latency-9976/latency-svc-k2bcg updated: 0 ports
I0917 07:23:42.426664       1 service.go:301] Service svc-latency-9976/latency-svc-k2fgz updated: 0 ports
I0917 07:23:42.439741       1 service.go:301] Service svc-latency-9976/latency-svc-k729v updated: 0 ports
I0917 07:23:42.456900       1 service.go:301] Service svc-latency-9976/latency-svc-kbs8h updated: 0 ports
I0917 07:23:42.467645       1 service.go:301] Service svc-latency-9976/latency-svc-kchmk updated: 0 ports
I0917 07:23:42.481566       1 service.go:301] Service svc-latency-9976/latency-svc-kd7bm updated: 0 ports
I0917 07:23:42.489532       1 service.go:301] Service svc-latency-9976/latency-svc-kptlp updated: 0 ports
I0917 07:23:42.514159       1 service.go:301] Service svc-latency-9976/latency-svc-kwf2z updated: 0 ports
I0917 07:23:42.526620       1 service.go:301] Service svc-latency-9976/latency-svc-l2zd6 updated: 0 ports
I0917 07:23:42.547084       1 service.go:301] Service svc-latency-9976/latency-svc-l5qj6 updated: 0 ports
I0917 07:23:42.569135       1 service.go:301] Service svc-latency-9976/latency-svc-ldt27 updated: 0 ports
I0917 07:23:42.574569       1 service.go:301] Service svc-latency-9976/latency-svc-lm2h6 updated: 0 ports
I0917 07:23:42.582011       1 service.go:301] Service svc-latency-9976/latency-svc-lmft6 updated: 0 ports
I0917 07:23:42.591102       1 service.go:301] Service svc-latency-9976/latency-svc-lp4q9 updated: 0 ports
I0917 07:23:42.598819       1 service.go:301] Service svc-latency-9976/latency-svc-lpkxz updated: 0 ports
I0917 07:23:42.611422       1 service.go:301] Service svc-latency-9976/latency-svc-lqcm2 updated: 0 ports
I0917 07:23:42.619763       1 service.go:301] Service svc-latency-9976/latency-svc-lr66z updated: 0 ports
I0917 07:23:42.632150       1 service.go:301] Service svc-latency-9976/latency-svc-ltczx updated: 0 ports
I0917 07:23:42.641470       1 service.go:301] Service svc-latency-9976/latency-svc-lw9dg updated: 0 ports
I0917 07:23:42.650855       1 service.go:301] Service svc-latency-9976/latency-svc-m7prd updated: 0 ports
I0917 07:23:42.672390       1 service.go:301] Service svc-latency-9976/latency-svc-m8q9l updated: 0 ports
I0917 07:23:42.687986       1 service.go:301] Service svc-latency-9976/latency-svc-mkjhq updated: 0 ports
I0917 07:23:42.698843       1 service.go:301] Service svc-latency-9976/latency-svc-n59fg updated: 0 ports
I0917 07:23:42.714630       1 service.go:301] Service svc-latency-9976/latency-svc-n79dz updated: 0 ports
I0917 07:23:42.738965       1 service.go:301] Service svc-latency-9976/latency-svc-ndbmj updated: 0 ports
I0917 07:23:42.753522       1 service.go:301] Service svc-latency-9976/latency-svc-ndwmx updated: 0 ports
I0917 07:23:42.776906       1 service.go:301] Service svc-latency-9976/latency-svc-ngqb8 updated: 0 ports
I0917 07:23:42.785003       1 service.go:301] Service svc-latency-9976/latency-svc-nkt8k updated: 0 ports
I0917 07:23:42.796666       1 service.go:301] Service svc-latency-9976/latency-svc-nn869 updated: 0 ports
I0917 07:23:42.806606       1 service.go:301] Service svc-latency-9976/latency-svc-np9x6 updated: 0 ports
I0917 
07:23:42.814835       1 service.go:301] Service svc-latency-9976/latency-svc-nqrtn updated: 0 ports\nI0917 07:23:42.823773       1 service.go:301] Service svc-latency-9976/latency-svc-ntbz7 updated: 0 ports\nI0917 07:23:42.833854       1 service.go:301] Service svc-latency-9976/latency-svc-pfdw5 updated: 0 ports\nI0917 07:23:42.842440       1 service.go:301] Service svc-latency-9976/latency-svc-ppvx4 updated: 0 ports\nI0917 07:23:42.850096       1 service.go:301] Service svc-latency-9976/latency-svc-pv5m5 updated: 0 ports\nI0917 07:23:42.863150       1 service.go:301] Service svc-latency-9976/latency-svc-pwws8 updated: 0 ports\nI0917 07:23:42.873199       1 service.go:301] Service svc-latency-9976/latency-svc-q95rk updated: 0 ports\nI0917 07:23:42.906811       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-ngqb8\"\nI0917 07:23:42.907084       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-dvvcr\"\nI0917 07:23:42.907215       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-j7m8c\"\nI0917 07:23:42.907317       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-kptlp\"\nI0917 07:23:42.907421       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-lm2h6\"\nI0917 07:23:42.907510       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-lr66z\"\nI0917 07:23:42.907605       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-ltczx\"\nI0917 07:23:42.907692       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-f6m9h\"\nI0917 07:23:42.907780       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-hlgbl\"\nI0917 07:23:42.907867       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-hzfp4\"\nI0917 07:23:42.907945       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-jhl9t\"\nI0917 07:23:42.907971       1 service.go:441] Removing service port 
\"svc-latency-9976/latency-svc-ppvx4\"\nI0917 07:23:42.907987       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-pwws8\"\nI0917 07:23:42.907997       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-q95rk\"\nI0917 07:23:42.908007       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-cjjhl\"\nI0917 07:23:42.908019       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-fbqcz\"\nI0917 07:23:42.908030       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-gqtcg\"\nI0917 07:23:42.908041       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-hxmgd\"\nI0917 07:23:42.908050       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-j2hbk\"\nI0917 07:23:42.908059       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-mkjhq\"\nI0917 07:23:42.908068       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-nqrtn\"\nI0917 07:23:42.908079       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-cxq2c\"\nI0917 07:23:42.908090       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-dmgl8\"\nI0917 07:23:42.908100       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-k729v\"\nI0917 07:23:42.908110       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-kwf2z\"\nI0917 07:23:42.908119       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-l5qj6\"\nI0917 07:23:42.908128       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-nkt8k\"\nI0917 07:23:42.908137       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-fmbx4\"\nI0917 07:23:42.908146       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-lmft6\"\nI0917 07:23:42.908157       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-lqcm2\"\nI0917 
07:23:42.908168       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-fhg5k\"\nI0917 07:23:42.908177       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-j4pgt\"\nI0917 07:23:42.908191       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-kbs8h\"\nI0917 07:23:42.908199       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-bxg5r\"\nI0917 07:23:42.908209       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-hdnv4\"\nI0917 07:23:42.908217       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-hptqz\"\nI0917 07:23:42.908226       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-m7prd\"\nI0917 07:23:42.908238       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-j57gx\"\nI0917 07:23:42.908250       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-lpkxz\"\nI0917 07:23:42.908259       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-ldt27\"\nI0917 07:23:42.908267       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-m8q9l\"\nI0917 07:23:42.908275       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-brs9c\"\nI0917 07:23:42.908282       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-c5nc5\"\nI0917 07:23:42.908290       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-frjvf\"\nI0917 07:23:42.908298       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-g7nxg\"\nI0917 07:23:42.908307       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-jh54j\"\nI0917 07:23:42.908317       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-k2fgz\"\nI0917 07:23:42.908328       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-clhzz\"\nI0917 07:23:42.908336       1 service.go:441] Removing 
service port \"svc-latency-9976/latency-svc-ddvf6\"\nI0917 07:23:42.908343       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-hkp6l\"\nI0917 07:23:42.908352       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-ndbmj\"\nI0917 07:23:42.908359       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-jfj24\"\nI0917 07:23:42.908367       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-kd7bm\"\nI0917 07:23:42.908375       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-lw9dg\"\nI0917 07:23:42.908383       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-pfdw5\"\nI0917 07:23:42.908391       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-gmvhj\"\nI0917 07:23:42.908403       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-ndwmx\"\nI0917 07:23:42.908412       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-nn869\"\nI0917 07:23:42.908422       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-cjztx\"\nI0917 07:23:42.908430       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-ck249\"\nI0917 07:23:42.908438       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-gbt5v\"\nI0917 07:23:42.908448       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-l2zd6\"\nI0917 07:23:42.908455       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-np9x6\"\nI0917 07:23:42.908463       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-ntbz7\"\nI0917 07:23:42.908471       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-dnqxd\"\nI0917 07:23:42.908479       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-g2w7h\"\nI0917 07:23:42.908487       1 service.go:441] Removing service port 
\"svc-latency-9976/latency-svc-hwbj7\"\nI0917 07:23:42.908496       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-k2bcg\"\nI0917 07:23:42.908504       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-n59fg\"\nI0917 07:23:42.908515       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-g8dmn\"\nI0917 07:23:42.908526       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-gvb8z\"\nI0917 07:23:42.908537       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-kchmk\"\nI0917 07:23:42.908544       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-lp4q9\"\nI0917 07:23:42.908553       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-n79dz\"\nI0917 07:23:42.908561       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-f5pph\"\nI0917 07:23:42.908568       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-g8rvc\"\nI0917 07:23:42.908576       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-h7rls\"\nI0917 07:23:42.908583       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-jwx6q\"\nI0917 07:23:42.908591       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-pv5m5\"\nI0917 07:23:42.908828       1 proxier.go:845] \"Syncing iptables rules\"\nI0917 07:23:42.916571       1 service.go:301] Service svc-latency-9976/latency-svc-qh62l updated: 0 ports\nI0917 07:23:42.933718       1 service.go:301] Service svc-latency-9976/latency-svc-qj4zf updated: 0 ports\nI0917 07:23:42.943978       1 service.go:301] Service svc-latency-9976/latency-svc-qmxtg updated: 0 ports\nI0917 07:23:42.959832       1 service.go:301] Service svc-latency-9976/latency-svc-qntf7 updated: 0 ports\nI0917 07:23:42.982471       1 service.go:301] Service svc-latency-9976/latency-svc-qq8rc updated: 0 ports\nI0917 07:23:43.013167       1 service.go:301] Service 
svc-latency-9976/latency-svc-qzz4p updated: 0 ports\nI0917 07:23:43.013994       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"107.181304ms\"\nI0917 07:23:43.026643       1 service.go:301] Service svc-latency-9976/latency-svc-r8m9z updated: 0 ports\nI0917 07:23:43.042586       1 service.go:301] Service svc-latency-9976/latency-svc-r8mnb updated: 0 ports\nI0917 07:23:43.076727       1 service.go:301] Service svc-latency-9976/latency-svc-rbqcf updated: 0 ports\nI0917 07:23:43.120180       1 service.go:301] Service svc-latency-9976/latency-svc-rfpnf updated: 0 ports\nI0917 07:23:43.129151       1 service.go:301] Service svc-latency-9976/latency-svc-rs2rd updated: 0 ports\nI0917 07:23:43.136037       1 service.go:301] Service svc-latency-9976/latency-svc-rsps2 updated: 0 ports\nI0917 07:23:43.146560       1 service.go:301] Service svc-latency-9976/latency-svc-rxnmg updated: 0 ports\nI0917 07:23:43.158990       1 service.go:301] Service svc-latency-9976/latency-svc-sh7tf updated: 0 ports\nI0917 07:23:43.196690       1 service.go:301] Service svc-latency-9976/latency-svc-sr4rs updated: 0 ports\nI0917 07:23:43.251053       1 service.go:301] Service svc-latency-9976/latency-svc-sssx4 updated: 0 ports\nI0917 07:23:43.278519       1 service.go:301] Service svc-latency-9976/latency-svc-stql2 updated: 0 ports\nI0917 07:23:43.307200       1 service.go:301] Service svc-latency-9976/latency-svc-svx28 updated: 0 ports\nI0917 07:23:43.335593       1 service.go:301] Service svc-latency-9976/latency-svc-t78qm updated: 0 ports\nI0917 07:23:43.351174       1 service.go:301] Service svc-latency-9976/latency-svc-t8kcr updated: 0 ports\nI0917 07:23:43.366367       1 service.go:301] Service svc-latency-9976/latency-svc-thrlx updated: 0 ports\nI0917 07:23:43.374929       1 service.go:301] Service svc-latency-9976/latency-svc-tj9jr updated: 0 ports\nI0917 07:23:43.382353       1 service.go:301] Service svc-latency-9976/latency-svc-tmdv9 updated: 0 ports\nI0917 07:23:43.394204       
1 service.go:301] Service svc-latency-9976/latency-svc-tp6wx updated: 0 ports\nI0917 07:23:43.406389       1 service.go:301] Service svc-latency-9976/latency-svc-tpb4d updated: 0 ports\nI0917 07:23:43.413775       1 service.go:301] Service svc-latency-9976/latency-svc-tqcsg updated: 0 ports\nI0917 07:23:43.428673       1 service.go:301] Service svc-latency-9976/latency-svc-trqrz updated: 0 ports\nI0917 07:23:43.435616       1 service.go:301] Service svc-latency-9976/latency-svc-trv2m updated: 0 ports\nI0917 07:23:43.445202       1 service.go:301] Service svc-latency-9976/latency-svc-tvkfd updated: 0 ports\nI0917 07:23:43.459205       1 service.go:301] Service svc-latency-9976/latency-svc-v2l8t updated: 0 ports\nI0917 07:23:43.466325       1 service.go:301] Service svc-latency-9976/latency-svc-v5g79 updated: 0 ports\nI0917 07:23:43.474368       1 service.go:301] Service svc-latency-9976/latency-svc-v7g9z updated: 0 ports\nI0917 07:23:43.485454       1 service.go:301] Service svc-latency-9976/latency-svc-v8d9b updated: 0 ports\nI0917 07:23:43.494511       1 service.go:301] Service svc-latency-9976/latency-svc-vc7pb updated: 0 ports\nI0917 07:23:43.512295       1 service.go:301] Service svc-latency-9976/latency-svc-vn92t updated: 0 ports\nI0917 07:23:43.536140       1 service.go:301] Service svc-latency-9976/latency-svc-vt7cc updated: 0 ports\nI0917 07:23:43.550616       1 service.go:301] Service svc-latency-9976/latency-svc-vzbss updated: 0 ports\nI0917 07:23:43.563718       1 service.go:301] Service svc-latency-9976/latency-svc-w44xj updated: 0 ports\nI0917 07:23:43.575741       1 service.go:301] Service svc-latency-9976/latency-svc-w4sdl updated: 0 ports\nI0917 07:23:43.585896       1 service.go:301] Service svc-latency-9976/latency-svc-w5q8c updated: 0 ports\nI0917 07:23:43.592957       1 service.go:301] Service svc-latency-9976/latency-svc-w6n2n updated: 0 ports\nI0917 07:23:43.600570       1 service.go:301] Service svc-latency-9976/latency-svc-wblwm updated: 0 
ports\nI0917 07:23:43.610603       1 service.go:301] Service svc-latency-9976/latency-svc-wbnss updated: 0 ports\nI0917 07:23:43.619872       1 service.go:301] Service svc-latency-9976/latency-svc-wfkxg updated: 0 ports\nI0917 07:23:43.626895       1 service.go:301] Service svc-latency-9976/latency-svc-wngh7 updated: 0 ports\nI0917 07:23:43.644890       1 service.go:301] Service svc-latency-9976/latency-svc-x4z4d updated: 0 ports\nI0917 07:23:43.657386       1 service.go:301] Service svc-latency-9976/latency-svc-xjp64 updated: 0 ports\nI0917 07:23:43.669986       1 service.go:301] Service svc-latency-9976/latency-svc-xn2ss updated: 0 ports\nI0917 07:23:43.682221       1 service.go:301] Service svc-latency-9976/latency-svc-xqjv5 updated: 0 ports\nI0917 07:23:43.698871       1 service.go:301] Service svc-latency-9976/latency-svc-xr7hp updated: 0 ports\nI0917 07:23:43.721148       1 service.go:301] Service svc-latency-9976/latency-svc-xs2h7 updated: 0 ports\nI0917 07:23:43.728839       1 service.go:301] Service svc-latency-9976/latency-svc-xx95w updated: 0 ports\nI0917 07:23:43.769838       1 service.go:301] Service svc-latency-9976/latency-svc-xz592 updated: 0 ports\nI0917 07:23:43.783643       1 service.go:301] Service svc-latency-9976/latency-svc-z242z updated: 0 ports\nI0917 07:23:43.791658       1 service.go:301] Service svc-latency-9976/latency-svc-z2fnd updated: 0 ports\nI0917 07:23:43.797360       1 service.go:301] Service svc-latency-9976/latency-svc-z646q updated: 0 ports\nI0917 07:23:43.808454       1 service.go:301] Service svc-latency-9976/latency-svc-z6lcp updated: 0 ports\nI0917 07:23:43.814735       1 service.go:301] Service svc-latency-9976/latency-svc-z8zg8 updated: 0 ports\nI0917 07:23:43.829865       1 service.go:301] Service svc-latency-9976/latency-svc-z96d7 updated: 0 ports\nI0917 07:23:43.836467       1 service.go:301] Service svc-latency-9976/latency-svc-zbzdt updated: 0 ports\nI0917 07:23:43.844719       1 service.go:301] Service 
svc-latency-9976/latency-svc-zc77s updated: 0 ports\nI0917 07:23:43.861232       1 service.go:301] Service svc-latency-9976/latency-svc-zdcj4 updated: 0 ports\nI0917 07:23:43.868573       1 service.go:301] Service svc-latency-9976/latency-svc-zltgc updated: 0 ports\nI0917 07:23:43.897417       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-rbqcf\"\nI0917 07:23:43.897452       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-z2fnd\"\nI0917 07:23:43.897461       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-z8zg8\"\nI0917 07:23:43.897469       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-trv2m\"\nI0917 07:23:43.897497       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-vc7pb\"\nI0917 07:23:43.897505       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-xn2ss\"\nI0917 07:23:43.897515       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-z6lcp\"\nI0917 07:23:43.897524       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-t78qm\"\nI0917 07:23:43.897538       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-tj9jr\"\nI0917 07:23:43.897546       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-tp6wx\"\nI0917 07:23:43.897552       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-v2l8t\"\nI0917 07:23:43.897560       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-v7g9z\"\nI0917 07:23:43.897567       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-w4sdl\"\nI0917 07:23:43.897574       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-qntf7\"\nI0917 07:23:43.897598       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-qq8rc\"\nI0917 07:23:43.897606       1 service.go:441] Removing service port 
\"svc-latency-9976/latency-svc-t8kcr\"\nI0917 07:23:43.897612       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-trqrz\"\nI0917 07:23:43.897619       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-vt7cc\"\nI0917 07:23:43.897625       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-wfkxg\"\nI0917 07:23:43.897633       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-xjp64\"\nI0917 07:23:43.897642       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-zltgc\"\nI0917 07:23:43.897651       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-qh62l\"\nI0917 07:23:43.897658       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-qj4zf\"\nI0917 07:23:43.897666       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-w44xj\"\nI0917 07:23:43.897672       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-wblwm\"\nI0917 07:23:43.897679       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-xr7hp\"\nI0917 07:23:43.897687       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-zc77s\"\nI0917 07:23:43.897694       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-r8m9z\"\nI0917 07:23:43.897702       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-rsps2\"\nI0917 07:23:43.897709       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-sr4rs\"\nI0917 07:23:43.897717       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-v8d9b\"\nI0917 07:23:43.897725       1 service.go:441] Removing service port \"svc-latency-9976/latency-svc-w6n2n\"\nI0917 07:23:43.897732       1 service.go:441] Removing service port \"svc