Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-08-02 09:10
Elapsed: 32m36s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 127 lines ...
I0802 09:11:16.387413    4100 up.go:43] Cleaning up any leaked resources from previous cluster
I0802 09:11:16.387449    4100 dumplogs.go:38] /logs/artifacts/659c064d-f371-11eb-9ef5-1a6369567a27/kops toolbox dump --name e2e-8608f95a98-9381a.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user core
I0802 09:11:16.403279    4119 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0802 09:11:16.403368    4119 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-8608f95a98-9381a.test-cncf-aws.k8s.io" not found
W0802 09:11:16.914273    4100 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0802 09:11:16.914317    4100 down.go:48] /logs/artifacts/659c064d-f371-11eb-9ef5-1a6369567a27/kops delete cluster --name e2e-8608f95a98-9381a.test-cncf-aws.k8s.io --yes
I0802 09:11:16.928475    4129 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0802 09:11:16.928575    4129 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-8608f95a98-9381a.test-cncf-aws.k8s.io" not found
I0802 09:11:17.414939    4100 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/08/02 09:11:17 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0802 09:11:17.423071    4100 http.go:37] curl https://ip.jsb.workers.dev
I0802 09:11:17.514654    4100 up.go:144] /logs/artifacts/659c064d-f371-11eb-9ef5-1a6369567a27/kops create cluster --name e2e-8608f95a98-9381a.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.20.9 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=075585003325/Flatcar-stable-2765.2.6-hvm --channel=alpha --networking=kubenet --container-runtime=docker --admin-access 34.71.109.67/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-southeast-2a --master-size c5.large
I0802 09:11:17.529185    4140 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0802 09:11:17.529314    4140 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I0802 09:11:17.577913    4140 create_cluster.go:724] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0802 09:11:18.165636    4140 new_cluster.go:962]  Cloud Provider ID = aws
... skipping 52 lines ...

I0802 09:11:47.420580    4100 up.go:181] /logs/artifacts/659c064d-f371-11eb-9ef5-1a6369567a27/kops validate cluster --name e2e-8608f95a98-9381a.test-cncf-aws.k8s.io --count 10 --wait 20m0s
I0802 09:11:47.436005    4162 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0802 09:11:47.436113    4162 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-8608f95a98-9381a.test-cncf-aws.k8s.io

W0802 09:11:49.378229    4162 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-8608f95a98-9381a.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.medium	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0802 09:11:59.407538    4162 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 287 lines ...
W0802 09:15:00.469657    4162 validate_cluster.go:221] (will retry): cluster not yet healthy
W0802 09:15:10.498831    4162 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-8608f95a98-9381a.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
... skipping 15 lines ...
W0802 09:15:20.527574    4162 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 7 lines ...
Machine	i-018f26259876b0424				machine "i-018f26259876b0424" has not yet joined cluster
Machine	i-0a44735e77bbb5a11				machine "i-0a44735e77bbb5a11" has not yet joined cluster
Machine	i-0f96665b2ac6ca911				machine "i-0f96665b2ac6ca911" has not yet joined cluster
Pod	kube-system/coredns-5489b75945-pq8sd		system-cluster-critical pod "coredns-5489b75945-pq8sd" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-zkljt	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-zkljt" is pending

Validation Failed
W0802 09:15:34.637601    4162 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 7 lines ...
Machine	i-018f26259876b0424				machine "i-018f26259876b0424" has not yet joined cluster
Machine	i-0a44735e77bbb5a11				machine "i-0a44735e77bbb5a11" has not yet joined cluster
Machine	i-0f96665b2ac6ca911				machine "i-0f96665b2ac6ca911" has not yet joined cluster
Pod	kube-system/coredns-5489b75945-pq8sd		system-cluster-critical pod "coredns-5489b75945-pq8sd" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-zkljt	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-zkljt" is pending

Validation Failed
W0802 09:15:47.212440    4162 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 11 lines ...
Node	ip-172-20-47-13.ap-southeast-2.compute.internal		node "ip-172-20-47-13.ap-southeast-2.compute.internal" is not ready
Node	ip-172-20-48-162.ap-southeast-2.compute.internal	node "ip-172-20-48-162.ap-southeast-2.compute.internal" is not ready
Node	ip-172-20-56-163.ap-southeast-2.compute.internal	node "ip-172-20-56-163.ap-southeast-2.compute.internal" is not ready
Pod	kube-system/coredns-5489b75945-pq8sd			system-cluster-critical pod "coredns-5489b75945-pq8sd" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-zkljt		system-cluster-critical pod "coredns-autoscaler-6f594f4c58-zkljt" is pending

Validation Failed
W0802 09:15:59.974555    4162 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME							MESSAGE
Node	ip-172-20-48-162.ap-southeast-2.compute.internal	node "ip-172-20-48-162.ap-southeast-2.compute.internal" is not ready
Pod	kube-system/coredns-5489b75945-pq8sd			system-cluster-critical pod "coredns-5489b75945-pq8sd" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-zkljt		system-cluster-critical pod "coredns-autoscaler-6f594f4c58-zkljt" is pending

Validation Failed
W0802 09:16:12.637709    4162 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 6 lines ...
ip-172-20-56-163.ap-southeast-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-5489b75945-sjr87	system-cluster-critical pod "coredns-5489b75945-sjr87" is not ready (coredns)

Validation Failed
W0802 09:16:25.243507    4162 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 509 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 887 lines ...
Aug  2 09:19:01.447: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Aug  2 09:19:01.447: INFO: stdout: "controller-manager scheduler etcd-0 etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of controller-manager
Aug  2 09:19:01.447: INFO: Running '/tmp/kubectl2207610533/kubectl --server=https://api.e2e-8608f95a98-9381a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7218 get componentstatuses controller-manager'
Aug  2 09:19:02.097: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Aug  2 09:19:02.097: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of scheduler
Aug  2 09:19:02.097: INFO: Running '/tmp/kubectl2207610533/kubectl --server=https://api.e2e-8608f95a98-9381a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7218 get componentstatuses scheduler'
Aug  2 09:19:02.760: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Aug  2 09:19:02.760: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of etcd-0
Aug  2 09:19:02.761: INFO: Running '/tmp/kubectl2207610533/kubectl --server=https://api.e2e-8608f95a98-9381a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7218 get componentstatuses etcd-0'
Aug  2 09:19:03.494: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Aug  2 09:19:03.495: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of etcd-1
Aug  2 09:19:03.495: INFO: Running '/tmp/kubectl2207610533/kubectl --server=https://api.e2e-8608f95a98-9381a.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7218 get componentstatuses etcd-1'
Aug  2 09:19:04.188: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Aug  2 09:19:04.188: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:19:04.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7218" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl get componentstatuses
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:786
    should get componentstatuses
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:787
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:19:06.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-9084" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:06.557: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 20 lines ...
Aug  2 09:18:58.919: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating secret secrets-4614/secret-test-42e892a6-135a-4e48-88a9-9674f48c1a24
STEP: Creating a pod to test consume secrets
Aug  2 09:18:59.685: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb752b6f-8d0b-4b0f-9bdd-c06b70700ae9" in namespace "secrets-4614" to be "Succeeded or Failed"
Aug  2 09:18:59.875: INFO: Pod "pod-configmaps-eb752b6f-8d0b-4b0f-9bdd-c06b70700ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 189.532724ms
Aug  2 09:19:02.069: INFO: Pod "pod-configmaps-eb752b6f-8d0b-4b0f-9bdd-c06b70700ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3839413s
Aug  2 09:19:04.260: INFO: Pod "pod-configmaps-eb752b6f-8d0b-4b0f-9bdd-c06b70700ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.574413869s
Aug  2 09:19:06.450: INFO: Pod "pod-configmaps-eb752b6f-8d0b-4b0f-9bdd-c06b70700ae9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.764876833s
STEP: Saw pod success
Aug  2 09:19:06.450: INFO: Pod "pod-configmaps-eb752b6f-8d0b-4b0f-9bdd-c06b70700ae9" satisfied condition "Succeeded or Failed"
Aug  2 09:19:06.640: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-configmaps-eb752b6f-8d0b-4b0f-9bdd-c06b70700ae9 container env-test: <nil>
STEP: delete the pod
Aug  2 09:19:07.043: INFO: Waiting for pod pod-configmaps-eb752b6f-8d0b-4b0f-9bdd-c06b70700ae9 to disappear
Aug  2 09:19:07.233: INFO: Pod pod-configmaps-eb752b6f-8d0b-4b0f-9bdd-c06b70700ae9 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:9.459 seconds]
[sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:07.632: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename security-context
Aug  2 09:18:59.267: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Aug  2 09:18:59.839: INFO: Waiting up to 5m0s for pod "security-context-4f52ae9a-15fb-4f44-beb1-1d5f5547e995" in namespace "security-context-6527" to be "Succeeded or Failed"
Aug  2 09:19:00.030: INFO: Pod "security-context-4f52ae9a-15fb-4f44-beb1-1d5f5547e995": Phase="Pending", Reason="", readiness=false. Elapsed: 190.255772ms
Aug  2 09:19:02.221: INFO: Pod "security-context-4f52ae9a-15fb-4f44-beb1-1d5f5547e995": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381529738s
Aug  2 09:19:04.412: INFO: Pod "security-context-4f52ae9a-15fb-4f44-beb1-1d5f5547e995": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572828796s
Aug  2 09:19:06.603: INFO: Pod "security-context-4f52ae9a-15fb-4f44-beb1-1d5f5547e995": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.763993239s
STEP: Saw pod success
Aug  2 09:19:06.604: INFO: Pod "security-context-4f52ae9a-15fb-4f44-beb1-1d5f5547e995" satisfied condition "Succeeded or Failed"
Aug  2 09:19:06.793: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod security-context-4f52ae9a-15fb-4f44-beb1-1d5f5547e995 container test-container: <nil>
STEP: delete the pod
Aug  2 09:19:07.195: INFO: Waiting for pod security-context-4f52ae9a-15fb-4f44-beb1-1d5f5547e995 to disappear
Aug  2 09:19:07.385: INFO: Pod security-context-4f52ae9a-15fb-4f44-beb1-1d5f5547e995 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:9.451 seconds]
[k8s.io] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:07.964: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 52 lines ...
• [SLOW TEST:10.532 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should run through the lifecycle of Pods and PodStatus [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:09.084: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 43 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-c0e48051-49f0-49eb-8366-e3c2c773b2bd
STEP: Creating a pod to test consume configMaps
Aug  2 09:19:10.451: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae683b38-5443-4ff8-a4f2-4d25642bdfdb" in namespace "configmap-3770" to be "Succeeded or Failed"
Aug  2 09:19:10.643: INFO: Pod "pod-configmaps-ae683b38-5443-4ff8-a4f2-4d25642bdfdb": Phase="Pending", Reason="", readiness=false. Elapsed: 192.193987ms
Aug  2 09:19:12.837: INFO: Pod "pod-configmaps-ae683b38-5443-4ff8-a4f2-4d25642bdfdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.386270159s
STEP: Saw pod success
Aug  2 09:19:12.837: INFO: Pod "pod-configmaps-ae683b38-5443-4ff8-a4f2-4d25642bdfdb" satisfied condition "Succeeded or Failed"
Aug  2 09:19:13.029: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod pod-configmaps-ae683b38-5443-4ff8-a4f2-4d25642bdfdb container agnhost-container: <nil>
STEP: delete the pod
Aug  2 09:19:13.434: INFO: Waiting for pod pod-configmaps-ae683b38-5443-4ff8-a4f2-4d25642bdfdb to disappear
Aug  2 09:19:13.626: INFO: Pod pod-configmaps-ae683b38-5443-4ff8-a4f2-4d25642bdfdb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:19:13.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3770" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:14.022: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 38 lines ...
STEP: Building a namespace api object, basename emptydir
Aug  2 09:18:59.325: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug  2 09:18:59.892: INFO: Waiting up to 5m0s for pod "pod-65338b3c-42ba-46e9-96ef-3c44e5669ecb" in namespace "emptydir-340" to be "Succeeded or Failed"
Aug  2 09:19:00.080: INFO: Pod "pod-65338b3c-42ba-46e9-96ef-3c44e5669ecb": Phase="Pending", Reason="", readiness=false. Elapsed: 188.448608ms
Aug  2 09:19:02.276: INFO: Pod "pod-65338b3c-42ba-46e9-96ef-3c44e5669ecb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383936134s
Aug  2 09:19:04.465: INFO: Pod "pod-65338b3c-42ba-46e9-96ef-3c44e5669ecb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.573306849s
Aug  2 09:19:06.654: INFO: Pod "pod-65338b3c-42ba-46e9-96ef-3c44e5669ecb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.762212014s
Aug  2 09:19:08.843: INFO: Pod "pod-65338b3c-42ba-46e9-96ef-3c44e5669ecb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.951053287s
Aug  2 09:19:11.033: INFO: Pod "pod-65338b3c-42ba-46e9-96ef-3c44e5669ecb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.141184028s
Aug  2 09:19:13.222: INFO: Pod "pod-65338b3c-42ba-46e9-96ef-3c44e5669ecb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.330253607s
STEP: Saw pod success
Aug  2 09:19:13.222: INFO: Pod "pod-65338b3c-42ba-46e9-96ef-3c44e5669ecb" satisfied condition "Succeeded or Failed"
Aug  2 09:19:13.411: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-65338b3c-42ba-46e9-96ef-3c44e5669ecb container test-container: <nil>
STEP: delete the pod
Aug  2 09:19:13.797: INFO: Waiting for pod pod-65338b3c-42ba-46e9-96ef-3c44e5669ecb to disappear
Aug  2 09:19:13.987: INFO: Pod pod-65338b3c-42ba-46e9-96ef-3c44e5669ecb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:15.996 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:14.564: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 19 lines ...
STEP: Building a namespace api object, basename emptydir
Aug  2 09:18:59.330: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug  2 09:18:59.900: INFO: Waiting up to 5m0s for pod "pod-99ef16ff-2cc5-4c7d-b1ed-69b3c7d81fa6" in namespace "emptydir-3607" to be "Succeeded or Failed"
Aug  2 09:19:00.089: INFO: Pod "pod-99ef16ff-2cc5-4c7d-b1ed-69b3c7d81fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 188.680689ms
Aug  2 09:19:02.283: INFO: Pod "pod-99ef16ff-2cc5-4c7d-b1ed-69b3c7d81fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382325294s
Aug  2 09:19:04.472: INFO: Pod "pod-99ef16ff-2cc5-4c7d-b1ed-69b3c7d81fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571395958s
Aug  2 09:19:06.661: INFO: Pod "pod-99ef16ff-2cc5-4c7d-b1ed-69b3c7d81fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.760318683s
Aug  2 09:19:08.850: INFO: Pod "pod-99ef16ff-2cc5-4c7d-b1ed-69b3c7d81fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.949332947s
Aug  2 09:19:11.043: INFO: Pod "pod-99ef16ff-2cc5-4c7d-b1ed-69b3c7d81fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.142043744s
Aug  2 09:19:13.232: INFO: Pod "pod-99ef16ff-2cc5-4c7d-b1ed-69b3c7d81fa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.331311108s
STEP: Saw pod success
Aug  2 09:19:13.232: INFO: Pod "pod-99ef16ff-2cc5-4c7d-b1ed-69b3c7d81fa6" satisfied condition "Succeeded or Failed"
Aug  2 09:19:13.421: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod pod-99ef16ff-2cc5-4c7d-b1ed-69b3c7d81fa6 container test-container: <nil>
STEP: delete the pod
Aug  2 09:19:13.806: INFO: Waiting for pod pod-99ef16ff-2cc5-4c7d-b1ed-69b3c7d81fa6 to disappear
Aug  2 09:19:13.995: INFO: Pod pod-99ef16ff-2cc5-4c7d-b1ed-69b3c7d81fa6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:19:14.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:19:16.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1359" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:17.090: INFO: Only supported for providers [openstack] (not aws)
... skipping 74 lines ...
• [SLOW TEST:10.061 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:18.046: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151

      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:19:17.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 64 lines ...
• [SLOW TEST:20.744 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:132
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:19.351: INFO: Only supported for providers [azure] (not aws)
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:988
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1033
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":1,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:23.086 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:19:18.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug  2 09:19:19.739: INFO: Waiting up to 5m0s for pod "pod-b479840f-fa53-4ea1-a1eb-9ef52e072327" in namespace "emptydir-8396" to be "Succeeded or Failed"
Aug  2 09:19:19.928: INFO: Pod "pod-b479840f-fa53-4ea1-a1eb-9ef52e072327": Phase="Pending", Reason="", readiness=false. Elapsed: 189.048192ms
Aug  2 09:19:22.117: INFO: Pod "pod-b479840f-fa53-4ea1-a1eb-9ef52e072327": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.378607306s
STEP: Saw pod success
Aug  2 09:19:22.118: INFO: Pod "pod-b479840f-fa53-4ea1-a1eb-9ef52e072327" satisfied condition "Succeeded or Failed"
Aug  2 09:19:22.307: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-b479840f-fa53-4ea1-a1eb-9ef52e072327 container test-container: <nil>
STEP: delete the pod
Aug  2 09:19:22.691: INFO: Waiting for pod pod-b479840f-fa53-4ea1-a1eb-9ef52e072327 to disappear
Aug  2 09:19:22.881: INFO: Pod pod-b479840f-fa53-4ea1-a1eb-9ef52e072327 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:19:22.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8396" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:23.280: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Aug  2 09:19:22.850: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-584" to be "Succeeded or Failed"
Aug  2 09:19:23.038: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 188.820719ms
Aug  2 09:19:25.228: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.378031636s
STEP: Saw pod success
Aug  2 09:19:25.228: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug  2 09:19:25.417: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Aug  2 09:19:25.802: INFO: Waiting for pod pod-host-path-test to disappear
Aug  2 09:19:25.990: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:19:25.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-584" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:26.378: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 42 lines ...
• [SLOW TEST:8.894 seconds]
[k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:27.019: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 113 lines ...
• [SLOW TEST:28.688 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:27.368: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:19:28.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2504" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:29.067: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 47 lines ...
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:19:29.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Aug  2 09:19:30.243: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:ap-southeast-2a]
Aug  2 09:19:30.243: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Aug  2 09:19:30.243: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 102 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
... skipping 16 lines ...
Aug  2 09:18:59.708: INFO: Creating resource for dynamic PV
Aug  2 09:18:59.708: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-5743-aws-sc9nsjs
STEP: creating a claim
STEP: Expanding non-expandable pvc
Aug  2 09:19:00.300: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug  2 09:19:00.682: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5743-aws-sc9nsjs",
  	... // 2 identical fields
  }

Aug  2 09:19:03.071: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
... skipping 13 lines ...
Aug  2 09:19:05.063: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
... skipping 13 lines ...
Aug  2 09:19:07.064: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
... skipping 13 lines ...
Aug  2 09:19:09.062: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
... skipping 13 lines ...
Aug  2 09:19:11.063: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5743-aws-sc9nsjs",
  	... // 2 identical fields
  }

Aug  2 09:19:13.062: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5743-aws-sc9nsjs",
  	... // 2 identical fields
  }

Aug  2 09:19:15.068: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5743-aws-sc9nsjs",
  	... // 2 identical fields
  }

Aug  2 09:19:17.070: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5743-aws-sc9nsjs",
  	... // 2 identical fields
  }

Aug  2 09:19:19.073: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5743-aws-sc9nsjs",
  	... // 2 identical fields
  }

Aug  2 09:19:21.065: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5743-aws-sc9nsjs",
  	... // 2 identical fields
  }

Aug  2 09:19:23.063: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5743-aws-sc9nsjs",
  	... // 2 identical fields
  }

Aug  2 09:19:25.063: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5743-aws-sc9nsjs",
  	... // 2 identical fields
  }

Aug  2 09:19:27.063: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5743-aws-sc9nsjs",
  	... // 2 identical fields
  }

Aug  2 09:19:29.063: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5743-aws-sc9nsjs",
  	... // 2 identical fields
  }

Aug  2 09:19:31.092: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5743-aws-sc9nsjs",
  	... // 2 identical fields
  }

Aug  2 09:19:31.475: INFO: Error updating pvc awsvj967: PersistentVolumeClaim "awsvj967" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:154
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":1,"failed":0}
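The repeated "spec is immutable" rejections above are the expected behavior this test verifies: the API server only accepts a larger `spec.resources.requests.storage` on a bound PVC when the claim's StorageClass opts in to expansion. A minimal sketch of a class that would permit the resize (the name here is hypothetical; the `allowVolumeExpansion` field and the in-tree `kubernetes.io/aws-ebs` provisioner are standard Kubernetes API):

```yaml
# Sketch only: expansion of bound PVCs is accepted solely when the
# referenced StorageClass sets allowVolumeExpansion: true.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-aws-sc   # hypothetical name
provisioner: kubernetes.io/aws-ebs
allowVolumeExpansion: true
```

The test's StorageClass (volume-expand-5743-aws-sc9nsjs) deliberately omits this field, so every resize attempt is rejected with the Forbidden error logged above.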
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:32.631: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 71 lines ...
• [SLOW TEST:11.630 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:19:33.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-173" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set","total":-1,"completed":4,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:10.417 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:37.513: INFO: Only supported for providers [gce gke] (not aws)
... skipping 110 lines ...
• [SLOW TEST:39.927 seconds]
[k8s.io] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:38.550: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 31 lines ...
Aug  2 09:19:32.846: INFO: Creating a PV followed by a PVC
Aug  2 09:19:33.228: INFO: Waiting for PV local-pv5sgkb to bind to PVC pvc-mzgxj
Aug  2 09:19:33.228: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-mzgxj] to have phase Bound
Aug  2 09:19:33.418: INFO: PersistentVolumeClaim pvc-mzgxj found and phase=Bound (189.706736ms)
Aug  2 09:19:33.418: INFO: Waiting up to 3m0s for PersistentVolume local-pv5sgkb to have phase Bound
Aug  2 09:19:33.608: INFO: PersistentVolume local-pv5sgkb found and phase=Bound (189.766463ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:361
STEP: local-volume-type: dir
STEP: Initializing test volumes
Aug  2 09:19:33.987: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cabcb226-a801-4639-a4e9-e4624ef7288c] Namespace:persistent-local-volumes-test-2402 PodName:hostexec-ip-172-20-35-97.ap-southeast-2.compute.internal-czznz ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Aug  2 09:19:33.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:31.614 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:339
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:361
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:39.277: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 96 lines ...
• [SLOW TEST:43.925 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:42.609: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-configmap-bk8c
STEP: Creating a pod to test atomic-volume-subpath
Aug  2 09:19:20.908: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bk8c" in namespace "subpath-2243" to be "Succeeded or Failed"
Aug  2 09:19:21.098: INFO: Pod "pod-subpath-test-configmap-bk8c": Phase="Pending", Reason="", readiness=false. Elapsed: 190.042282ms
Aug  2 09:19:23.288: INFO: Pod "pod-subpath-test-configmap-bk8c": Phase="Running", Reason="", readiness=true. Elapsed: 2.380317814s
Aug  2 09:19:25.478: INFO: Pod "pod-subpath-test-configmap-bk8c": Phase="Running", Reason="", readiness=true. Elapsed: 4.570825946s
Aug  2 09:19:27.668: INFO: Pod "pod-subpath-test-configmap-bk8c": Phase="Running", Reason="", readiness=true. Elapsed: 6.760740388s
Aug  2 09:19:29.859: INFO: Pod "pod-subpath-test-configmap-bk8c": Phase="Running", Reason="", readiness=true. Elapsed: 8.951257718s
Aug  2 09:19:32.050: INFO: Pod "pod-subpath-test-configmap-bk8c": Phase="Running", Reason="", readiness=true. Elapsed: 11.141903837s
Aug  2 09:19:34.240: INFO: Pod "pod-subpath-test-configmap-bk8c": Phase="Running", Reason="", readiness=true. Elapsed: 13.33232935s
Aug  2 09:19:36.430: INFO: Pod "pod-subpath-test-configmap-bk8c": Phase="Running", Reason="", readiness=true. Elapsed: 15.522460904s
Aug  2 09:19:38.620: INFO: Pod "pod-subpath-test-configmap-bk8c": Phase="Running", Reason="", readiness=true. Elapsed: 17.712558567s
Aug  2 09:19:40.811: INFO: Pod "pod-subpath-test-configmap-bk8c": Phase="Running", Reason="", readiness=true. Elapsed: 19.903009086s
Aug  2 09:19:43.001: INFO: Pod "pod-subpath-test-configmap-bk8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.093431485s
STEP: Saw pod success
Aug  2 09:19:43.001: INFO: Pod "pod-subpath-test-configmap-bk8c" satisfied condition "Succeeded or Failed"
Aug  2 09:19:43.191: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-subpath-test-configmap-bk8c container test-container-subpath-configmap-bk8c: <nil>
STEP: delete the pod
Aug  2 09:19:43.582: INFO: Waiting for pod pod-subpath-test-configmap-bk8c to disappear
Aug  2 09:19:43.771: INFO: Pod pod-subpath-test-configmap-bk8c no longer exists
STEP: Deleting pod pod-subpath-test-configmap-bk8c
Aug  2 09:19:43.771: INFO: Deleting pod "pod-subpath-test-configmap-bk8c" in namespace "subpath-2243"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:19:44.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6551" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:46.017: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 181 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:169

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1570
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:47.925: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 80 lines ...
Aug  2 09:19:19.219: INFO: PersistentVolumeClaim pvc-px2sh found but phase is Pending instead of Bound.
Aug  2 09:19:21.409: INFO: PersistentVolumeClaim pvc-px2sh found and phase=Bound (11.144544279s)
Aug  2 09:19:21.409: INFO: Waiting up to 3m0s for PersistentVolume local-9d2t9 to have phase Bound
Aug  2 09:19:21.600: INFO: PersistentVolume local-9d2t9 found and phase=Bound (190.929382ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wgct
STEP: Creating a pod to test subpath
Aug  2 09:19:22.171: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wgct" in namespace "provisioning-4828" to be "Succeeded or Failed"
Aug  2 09:19:22.365: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct": Phase="Pending", Reason="", readiness=false. Elapsed: 193.504094ms
Aug  2 09:19:24.555: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383294494s
Aug  2 09:19:26.744: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572586354s
Aug  2 09:19:28.933: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct": Phase="Pending", Reason="", readiness=false. Elapsed: 6.76199518s
Aug  2 09:19:31.164: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct": Phase="Pending", Reason="", readiness=false. Elapsed: 8.992558029s
Aug  2 09:19:33.354: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct": Phase="Pending", Reason="", readiness=false. Elapsed: 11.182517006s
Aug  2 09:19:35.549: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct": Phase="Pending", Reason="", readiness=false. Elapsed: 13.37703374s
Aug  2 09:19:37.739: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct": Phase="Pending", Reason="", readiness=false. Elapsed: 15.567589622s
Aug  2 09:19:39.928: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct": Phase="Pending", Reason="", readiness=false. Elapsed: 17.756844273s
Aug  2 09:19:42.118: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct": Phase="Pending", Reason="", readiness=false. Elapsed: 19.946489796s
Aug  2 09:19:44.310: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct": Phase="Pending", Reason="", readiness=false. Elapsed: 22.138651762s
Aug  2 09:19:46.499: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.327989307s
STEP: Saw pod success
Aug  2 09:19:46.500: INFO: Pod "pod-subpath-test-preprovisionedpv-wgct" satisfied condition "Succeeded or Failed"
Aug  2 09:19:46.689: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-wgct container test-container-subpath-preprovisionedpv-wgct: <nil>
STEP: delete the pod
Aug  2 09:19:47.075: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wgct to disappear
Aug  2 09:19:47.264: INFO: Pod pod-subpath-test-preprovisionedpv-wgct no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wgct
Aug  2 09:19:47.264: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wgct" in namespace "provisioning-4828"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:50.880: INFO: Driver emptydir doesn't support ext4 -- skipping
... skipping 77 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:169

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:19:01.119: INFO: >>> kubeConfig: /root/.kube/config
... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
Aug  2 09:19:48.914: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug  2 09:19:49.105: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9ndh
STEP: Creating a pod to test subpath
Aug  2 09:19:49.297: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9ndh" in namespace "provisioning-7298" to be "Succeeded or Failed"
Aug  2 09:19:49.492: INFO: Pod "pod-subpath-test-inlinevolume-9ndh": Phase="Pending", Reason="", readiness=false. Elapsed: 194.662935ms
Aug  2 09:19:51.682: INFO: Pod "pod-subpath-test-inlinevolume-9ndh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.384701294s
STEP: Saw pod success
Aug  2 09:19:51.682: INFO: Pod "pod-subpath-test-inlinevolume-9ndh" satisfied condition "Succeeded or Failed"
Aug  2 09:19:51.871: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-subpath-test-inlinevolume-9ndh container test-container-subpath-inlinevolume-9ndh: <nil>
STEP: delete the pod
Aug  2 09:19:52.262: INFO: Waiting for pod pod-subpath-test-inlinevolume-9ndh to disappear
Aug  2 09:19:52.452: INFO: Pod pod-subpath-test-inlinevolume-9ndh no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9ndh
Aug  2 09:19:52.452: INFO: Deleting pod "pod-subpath-test-inlinevolume-9ndh" in namespace "provisioning-7298"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":27,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:53.244: INFO: Only supported for providers [gce gke] (not aws)
... skipping 42 lines ...
• [SLOW TEST:5.800 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update/patch PodDisruptionBudget status
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:115
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status","total":-1,"completed":4,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:53.763: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 157 lines ...
Aug  2 09:19:34.858: INFO: PersistentVolumeClaim pvc-5llbg found but phase is Pending instead of Bound.
Aug  2 09:19:37.048: INFO: PersistentVolumeClaim pvc-5llbg found and phase=Bound (8.946253134s)
Aug  2 09:19:37.048: INFO: Waiting up to 3m0s for PersistentVolume local-wg9sh to have phase Bound
Aug  2 09:19:37.240: INFO: PersistentVolume local-wg9sh found and phase=Bound (192.061353ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-22pt
STEP: Creating a pod to test subpath
Aug  2 09:19:37.809: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-22pt" in namespace "provisioning-1279" to be "Succeeded or Failed"
Aug  2 09:19:37.999: INFO: Pod "pod-subpath-test-preprovisionedpv-22pt": Phase="Pending", Reason="", readiness=false. Elapsed: 189.800623ms
Aug  2 09:19:40.188: INFO: Pod "pod-subpath-test-preprovisionedpv-22pt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379267863s
Aug  2 09:19:42.378: INFO: Pod "pod-subpath-test-preprovisionedpv-22pt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.568827624s
Aug  2 09:19:44.567: INFO: Pod "pod-subpath-test-preprovisionedpv-22pt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.758040892s
Aug  2 09:19:46.757: INFO: Pod "pod-subpath-test-preprovisionedpv-22pt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.947682618s
STEP: Saw pod success
Aug  2 09:19:46.757: INFO: Pod "pod-subpath-test-preprovisionedpv-22pt" satisfied condition "Succeeded or Failed"
Aug  2 09:19:46.946: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-22pt container test-container-subpath-preprovisionedpv-22pt: <nil>
STEP: delete the pod
Aug  2 09:19:47.337: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-22pt to disappear
Aug  2 09:19:47.528: INFO: Pod pod-subpath-test-preprovisionedpv-22pt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-22pt
Aug  2 09:19:47.528: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-22pt" in namespace "provisioning-1279"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:54.288: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 198 lines ...
Aug  2 09:19:39.733: INFO: PersistentVolumeClaim pvc-4jw9h found and phase=Bound (188.492694ms)
Aug  2 09:19:39.733: INFO: Waiting up to 3m0s for PersistentVolume nfs-kcfp6 to have phase Bound
Aug  2 09:19:39.922: INFO: PersistentVolume nfs-kcfp6 found and phase=Bound (188.719675ms)
STEP: Checking pod has write access to PersistentVolume
Aug  2 09:19:40.299: INFO: Creating nfs test pod
Aug  2 09:19:40.488: INFO: Pod should terminate with exitcode 0 (success)
Aug  2 09:19:40.488: INFO: Waiting up to 5m0s for pod "pvc-tester-22nqs" in namespace "pv-4898" to be "Succeeded or Failed"
Aug  2 09:19:40.677: INFO: Pod "pvc-tester-22nqs": Phase="Pending", Reason="", readiness=false. Elapsed: 188.804303ms
Aug  2 09:19:42.867: INFO: Pod "pvc-tester-22nqs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.378068243s
STEP: Saw pod success
Aug  2 09:19:42.867: INFO: Pod "pvc-tester-22nqs" satisfied condition "Succeeded or Failed"
Aug  2 09:19:42.867: INFO: Pod pvc-tester-22nqs succeeded 
Aug  2 09:19:42.867: INFO: Deleting pod "pvc-tester-22nqs" in namespace "pv-4898"
Aug  2 09:19:43.059: INFO: Wait up to 5m0s for pod "pvc-tester-22nqs" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Aug  2 09:19:43.250: INFO: Deleting PVC pvc-4jw9h to trigger reclamation of PV 
Aug  2 09:19:43.250: INFO: Deleting PersistentVolumeClaim "pvc-4jw9h"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":2,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:55.356: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 88 lines ...
Aug  2 09:19:48.721: INFO: Waiting for pod aws-client to disappear
Aug  2 09:19:48.910: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Aug  2 09:19:48.910: INFO: Deleting PersistentVolumeClaim "pvc-2txtx"
Aug  2 09:19:49.099: INFO: Deleting PersistentVolume "aws-8fj45"
Aug  2 09:19:49.593: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-069d46ec1c5708b81", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-069d46ec1c5708b81 is currently attached to i-0f96665b2ac6ca911
	status code: 400, request id: 8f3c3c84-0d06-49cc-99e1-b0dc8025e94f
Aug  2 09:19:55.528: INFO: Successfully deleted PD "aws://ap-southeast-2a/vol-069d46ec1c5708b81".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:19:55.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5403" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:19:58.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1834" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:19:58.524: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 44 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362
Aug  2 09:19:54.394: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-2e35bd8a-4ac8-437e-bd2d-75aad343771c" in namespace "security-context-test-3237" to be "Succeeded or Failed"
Aug  2 09:19:54.584: INFO: Pod "alpine-nnp-true-2e35bd8a-4ac8-437e-bd2d-75aad343771c": Phase="Pending", Reason="", readiness=false. Elapsed: 189.532656ms
Aug  2 09:19:56.774: INFO: Pod "alpine-nnp-true-2e35bd8a-4ac8-437e-bd2d-75aad343771c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379943435s
Aug  2 09:19:58.965: INFO: Pod "alpine-nnp-true-2e35bd8a-4ac8-437e-bd2d-75aad343771c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570488401s
Aug  2 09:20:01.155: INFO: Pod "alpine-nnp-true-2e35bd8a-4ac8-437e-bd2d-75aad343771c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.760673979s
Aug  2 09:20:03.345: INFO: Pod "alpine-nnp-true-2e35bd8a-4ac8-437e-bd2d-75aad343771c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.95085673s
Aug  2 09:20:03.345: INFO: Pod "alpine-nnp-true-2e35bd8a-4ac8-437e-bd2d-75aad343771c" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:20:03.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3237" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:03.947: INFO: Driver local doesn't support ntfs -- skipping
... skipping 104 lines ...
Aug  2 09:19:49.665: INFO: PersistentVolumeClaim pvc-d5mgm found but phase is Pending instead of Bound.
Aug  2 09:19:51.855: INFO: PersistentVolumeClaim pvc-d5mgm found and phase=Bound (13.327314495s)
Aug  2 09:19:51.855: INFO: Waiting up to 3m0s for PersistentVolume local-fhc7k to have phase Bound
Aug  2 09:19:52.044: INFO: PersistentVolume local-fhc7k found and phase=Bound (189.057762ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6dzj
STEP: Creating a pod to test subpath
Aug  2 09:19:52.614: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6dzj" in namespace "provisioning-2864" to be "Succeeded or Failed"
Aug  2 09:19:52.804: INFO: Pod "pod-subpath-test-preprovisionedpv-6dzj": Phase="Pending", Reason="", readiness=false. Elapsed: 189.311419ms
Aug  2 09:19:54.993: INFO: Pod "pod-subpath-test-preprovisionedpv-6dzj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379125881s
Aug  2 09:19:57.183: INFO: Pod "pod-subpath-test-preprovisionedpv-6dzj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.569056373s
Aug  2 09:19:59.374: INFO: Pod "pod-subpath-test-preprovisionedpv-6dzj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.759414738s
Aug  2 09:20:01.564: INFO: Pod "pod-subpath-test-preprovisionedpv-6dzj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.94934875s
STEP: Saw pod success
Aug  2 09:20:01.564: INFO: Pod "pod-subpath-test-preprovisionedpv-6dzj" satisfied condition "Succeeded or Failed"
Aug  2 09:20:01.755: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-6dzj container test-container-subpath-preprovisionedpv-6dzj: <nil>
STEP: delete the pod
Aug  2 09:20:02.147: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6dzj to disappear
Aug  2 09:20:02.337: INFO: Pod pod-subpath-test-preprovisionedpv-6dzj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6dzj
Aug  2 09:20:02.337: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6dzj" in namespace "provisioning-2864"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":16,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:04.997: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 124 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:19:56.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Aug  2 09:19:57.876: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-1c82ff3e-beea-4089-8fd4-78eb0c0d17f6" in namespace "security-context-test-2261" to be "Succeeded or Failed"
Aug  2 09:19:58.065: INFO: Pod "alpine-nnp-false-1c82ff3e-beea-4089-8fd4-78eb0c0d17f6": Phase="Pending", Reason="", readiness=false. Elapsed: 188.656691ms
Aug  2 09:20:00.254: INFO: Pod "alpine-nnp-false-1c82ff3e-beea-4089-8fd4-78eb0c0d17f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377845763s
Aug  2 09:20:02.444: INFO: Pod "alpine-nnp-false-1c82ff3e-beea-4089-8fd4-78eb0c0d17f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.567947004s
Aug  2 09:20:04.633: INFO: Pod "alpine-nnp-false-1c82ff3e-beea-4089-8fd4-78eb0c0d17f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.756999962s
Aug  2 09:20:04.633: INFO: Pod "alpine-nnp-false-1c82ff3e-beea-4089-8fd4-78eb0c0d17f6" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:20:04.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2261" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:20:05.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6274" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:05.764: INFO: Only supported for providers [azure] (not aws)
... skipping 37 lines ...
Aug  2 09:19:50.368: INFO: PersistentVolumeClaim pvc-9s5wh found but phase is Pending instead of Bound.
Aug  2 09:19:52.559: INFO: PersistentVolumeClaim pvc-9s5wh found and phase=Bound (4.570429985s)
Aug  2 09:19:52.559: INFO: Waiting up to 3m0s for PersistentVolume local-c8xh2 to have phase Bound
Aug  2 09:19:52.749: INFO: PersistentVolume local-c8xh2 found and phase=Bound (190.02923ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jwjf
STEP: Creating a pod to test subpath
Aug  2 09:19:53.321: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jwjf" in namespace "provisioning-8638" to be "Succeeded or Failed"
Aug  2 09:19:53.511: INFO: Pod "pod-subpath-test-preprovisionedpv-jwjf": Phase="Pending", Reason="", readiness=false. Elapsed: 189.795612ms
Aug  2 09:19:55.702: INFO: Pod "pod-subpath-test-preprovisionedpv-jwjf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380535417s
Aug  2 09:19:57.893: INFO: Pod "pod-subpath-test-preprovisionedpv-jwjf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571044646s
Aug  2 09:20:00.083: INFO: Pod "pod-subpath-test-preprovisionedpv-jwjf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.761759324s
Aug  2 09:20:02.274: INFO: Pod "pod-subpath-test-preprovisionedpv-jwjf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.952431462s
STEP: Saw pod success
Aug  2 09:20:02.274: INFO: Pod "pod-subpath-test-preprovisionedpv-jwjf" satisfied condition "Succeeded or Failed"
Aug  2 09:20:02.464: INFO: Trying to get logs from node ip-172-20-47-13.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-jwjf container test-container-subpath-preprovisionedpv-jwjf: <nil>
STEP: delete the pod
Aug  2 09:20:02.860: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jwjf to disappear
Aug  2 09:20:03.050: INFO: Pod pod-subpath-test-preprovisionedpv-jwjf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jwjf
Aug  2 09:20:03.050: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jwjf" in namespace "provisioning-8638"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":24,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:05.868: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 41 lines ...
Aug  2 09:19:49.192: INFO: PersistentVolumeClaim pvc-44jmz found but phase is Pending instead of Bound.
Aug  2 09:19:51.382: INFO: PersistentVolumeClaim pvc-44jmz found and phase=Bound (13.355734246s)
Aug  2 09:19:51.382: INFO: Waiting up to 3m0s for PersistentVolume local-4d8nl to have phase Bound
Aug  2 09:19:51.572: INFO: PersistentVolume local-4d8nl found and phase=Bound (189.676617ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bhdz
STEP: Creating a pod to test subpath
Aug  2 09:19:52.143: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bhdz" in namespace "provisioning-87" to be "Succeeded or Failed"
Aug  2 09:19:52.335: INFO: Pod "pod-subpath-test-preprovisionedpv-bhdz": Phase="Pending", Reason="", readiness=false. Elapsed: 192.462269ms
Aug  2 09:19:54.526: INFO: Pod "pod-subpath-test-preprovisionedpv-bhdz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382843299s
Aug  2 09:19:56.716: INFO: Pod "pod-subpath-test-preprovisionedpv-bhdz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.573022302s
Aug  2 09:19:58.906: INFO: Pod "pod-subpath-test-preprovisionedpv-bhdz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.76341666s
Aug  2 09:20:01.096: INFO: Pod "pod-subpath-test-preprovisionedpv-bhdz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.953422713s
Aug  2 09:20:03.287: INFO: Pod "pod-subpath-test-preprovisionedpv-bhdz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.143920857s
STEP: Saw pod success
Aug  2 09:20:03.287: INFO: Pod "pod-subpath-test-preprovisionedpv-bhdz" satisfied condition "Succeeded or Failed"
Aug  2 09:20:03.479: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-bhdz container test-container-subpath-preprovisionedpv-bhdz: <nil>
STEP: delete the pod
Aug  2 09:20:03.870: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bhdz to disappear
Aug  2 09:20:04.059: INFO: Pod pod-subpath-test-preprovisionedpv-bhdz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bhdz
Aug  2 09:20:04.059: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bhdz" in namespace "provisioning-87"
... skipping 49 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-095542dd-cab6-47d0-863a-0f1e43d612b3
STEP: Creating a pod to test consume configMaps
Aug  2 09:20:06.531: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1f5afceb-24f7-4e1b-83a5-041ebdd1fb79" in namespace "projected-7267" to be "Succeeded or Failed"
Aug  2 09:20:06.721: INFO: Pod "pod-projected-configmaps-1f5afceb-24f7-4e1b-83a5-041ebdd1fb79": Phase="Pending", Reason="", readiness=false. Elapsed: 189.788511ms
Aug  2 09:20:08.911: INFO: Pod "pod-projected-configmaps-1f5afceb-24f7-4e1b-83a5-041ebdd1fb79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.380102998s
STEP: Saw pod success
Aug  2 09:20:08.911: INFO: Pod "pod-projected-configmaps-1f5afceb-24f7-4e1b-83a5-041ebdd1fb79" satisfied condition "Succeeded or Failed"
Aug  2 09:20:09.101: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod pod-projected-configmaps-1f5afceb-24f7-4e1b-83a5-041ebdd1fb79 container agnhost-container: <nil>
STEP: delete the pod
Aug  2 09:20:09.492: INFO: Waiting for pod pod-projected-configmaps-1f5afceb-24f7-4e1b-83a5-041ebdd1fb79 to disappear
Aug  2 09:20:09.682: INFO: Pod pod-projected-configmaps-1f5afceb-24f7-4e1b-83a5-041ebdd1fb79 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:20:09.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7267" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:10.081: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":37,"failed":0}
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:19:54.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 63 lines ...
• [SLOW TEST:20.027 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:10.952: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 83 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":14,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:20:06.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
• [SLOW TEST:6.785 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:13.470: INFO: Driver hostPath doesn't support ntfs -- skipping
... skipping 108 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:14.916: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 284 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:16.053: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 23 lines ...
Aug  2 09:20:11.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug  2 09:20:12.783: INFO: Waiting up to 5m0s for pod "pod-bf98066a-582f-4755-9269-a71392d9012e" in namespace "emptydir-991" to be "Succeeded or Failed"
Aug  2 09:20:12.974: INFO: Pod "pod-bf98066a-582f-4755-9269-a71392d9012e": Phase="Pending", Reason="", readiness=false. Elapsed: 190.928305ms
Aug  2 09:20:15.165: INFO: Pod "pod-bf98066a-582f-4755-9269-a71392d9012e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.381740615s
STEP: Saw pod success
Aug  2 09:20:15.165: INFO: Pod "pod-bf98066a-582f-4755-9269-a71392d9012e" satisfied condition "Succeeded or Failed"
Aug  2 09:20:15.357: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-bf98066a-582f-4755-9269-a71392d9012e container test-container: <nil>
STEP: delete the pod
Aug  2 09:20:15.746: INFO: Waiting for pod pod-bf98066a-582f-4755-9269-a71392d9012e to disappear
Aug  2 09:20:15.938: INFO: Pod pod-bf98066a-582f-4755-9269-a71392d9012e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:20:15.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-991" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 14 lines ...
Aug  2 09:20:17.025: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [1.330 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:126

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 230 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":55,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:17.379: INFO: Only supported for providers [openstack] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1094
------------------------------
... skipping 84 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191

      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:833
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:20:08.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
• [SLOW TEST:9.880 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:18.149: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 92 lines ...
• [SLOW TEST:59.213 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be restarted with a docker exec liveness probe with timeout 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:216
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout ","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:22.556: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 287 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:241
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:22.742: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 41 lines ...
• [SLOW TEST:6.798 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1911
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":5,"skipped":18,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:23.179: INFO: Only supported for providers [openstack] (not aws)
... skipping 67 lines ...
Aug  2 09:20:22.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug  2 09:20:23.868: INFO: Waiting up to 5m0s for pod "pod-62a453e2-a2e7-42c2-bf05-56df1b48191f" in namespace "emptydir-551" to be "Succeeded or Failed"
Aug  2 09:20:24.058: INFO: Pod "pod-62a453e2-a2e7-42c2-bf05-56df1b48191f": Phase="Pending", Reason="", readiness=false. Elapsed: 190.06929ms
Aug  2 09:20:26.248: INFO: Pod "pod-62a453e2-a2e7-42c2-bf05-56df1b48191f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.379326062s
STEP: Saw pod success
Aug  2 09:20:26.248: INFO: Pod "pod-62a453e2-a2e7-42c2-bf05-56df1b48191f" satisfied condition "Succeeded or Failed"
Aug  2 09:20:26.437: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-62a453e2-a2e7-42c2-bf05-56df1b48191f container test-container: <nil>
STEP: delete the pod
Aug  2 09:20:26.824: INFO: Waiting for pod pod-62a453e2-a2e7-42c2-bf05-56df1b48191f to disappear
Aug  2 09:20:27.013: INFO: Pod pod-62a453e2-a2e7-42c2-bf05-56df1b48191f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:20:27.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-551" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":55,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:27.426: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:20:27.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1977" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets","total":-1,"completed":6,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:27.815: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 55 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SS
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":6,"skipped":37,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:20:10.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:988
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:989
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":7,"skipped":37,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
Aug  2 09:20:18.124: INFO: Got stdout from 52.62.28.168:22: Hello from core@ip-172-20-56-163.ap-southeast-2.compute.internal
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Aug  2 09:20:22.666: INFO: Got stdout from 13.211.227.140:22: stdout
Aug  2 09:20:22.667: INFO: Got stderr from 13.211.227.140:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing core@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [k8s.io] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:20:27.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-6034" for this suite.


• [SLOW TEST:22.176 seconds]
[k8s.io] [sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":3,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:28.059: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 119 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:20:30.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4024" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set","total":-1,"completed":8,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Aug  2 09:20:19.046: INFO: PersistentVolumeClaim pvc-r6xq4 found but phase is Pending instead of Bound.
Aug  2 09:20:21.236: INFO: PersistentVolumeClaim pvc-r6xq4 found and phase=Bound (2.379950882s)
Aug  2 09:20:21.236: INFO: Waiting up to 3m0s for PersistentVolume local-w7vv9 to have phase Bound
Aug  2 09:20:21.426: INFO: PersistentVolume local-w7vv9 found and phase=Bound (189.961153ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nxvq
STEP: Creating a pod to test subpath
Aug  2 09:20:21.996: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nxvq" in namespace "provisioning-2890" to be "Succeeded or Failed"
Aug  2 09:20:22.186: INFO: Pod "pod-subpath-test-preprovisionedpv-nxvq": Phase="Pending", Reason="", readiness=false. Elapsed: 189.828ms
Aug  2 09:20:24.376: INFO: Pod "pod-subpath-test-preprovisionedpv-nxvq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380012357s
Aug  2 09:20:26.566: INFO: Pod "pod-subpath-test-preprovisionedpv-nxvq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.570207646s
STEP: Saw pod success
Aug  2 09:20:26.567: INFO: Pod "pod-subpath-test-preprovisionedpv-nxvq" satisfied condition "Succeeded or Failed"
Aug  2 09:20:26.756: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-nxvq container test-container-subpath-preprovisionedpv-nxvq: <nil>
STEP: delete the pod
Aug  2 09:20:27.145: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nxvq to disappear
Aug  2 09:20:27.335: INFO: Pod pod-subpath-test-preprovisionedpv-nxvq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nxvq
Aug  2 09:20:27.335: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nxvq" in namespace "provisioning-2890"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:31.316: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:20:33.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9204" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace","total":-1,"completed":9,"skipped":44,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 25 lines ...
Aug  2 09:20:19.559: INFO: PersistentVolumeClaim pvc-b4lvx found but phase is Pending instead of Bound.
Aug  2 09:20:21.748: INFO: PersistentVolumeClaim pvc-b4lvx found and phase=Bound (13.323991617s)
Aug  2 09:20:21.748: INFO: Waiting up to 3m0s for PersistentVolume local-4rwlg to have phase Bound
Aug  2 09:20:21.937: INFO: PersistentVolume local-4rwlg found and phase=Bound (189.308655ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gcs5
STEP: Creating a pod to test subpath
Aug  2 09:20:22.512: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gcs5" in namespace "provisioning-7782" to be "Succeeded or Failed"
Aug  2 09:20:22.701: INFO: Pod "pod-subpath-test-preprovisionedpv-gcs5": Phase="Pending", Reason="", readiness=false. Elapsed: 188.760447ms
Aug  2 09:20:24.891: INFO: Pod "pod-subpath-test-preprovisionedpv-gcs5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379056528s
Aug  2 09:20:27.081: INFO: Pod "pod-subpath-test-preprovisionedpv-gcs5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.568376919s
STEP: Saw pod success
Aug  2 09:20:27.081: INFO: Pod "pod-subpath-test-preprovisionedpv-gcs5" satisfied condition "Succeeded or Failed"
Aug  2 09:20:27.269: INFO: Trying to get logs from node ip-172-20-47-13.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-gcs5 container test-container-subpath-preprovisionedpv-gcs5: <nil>
STEP: delete the pod
Aug  2 09:20:27.664: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gcs5 to disappear
Aug  2 09:20:27.852: INFO: Pod pod-subpath-test-preprovisionedpv-gcs5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gcs5
Aug  2 09:20:27.853: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gcs5" in namespace "provisioning-7782"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:34.505: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 162 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:37.194: INFO: Only supported for providers [azure] (not aws)
... skipping 48 lines ...
• [SLOW TEST:10.004 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:457
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":4,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":6,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 1774 lines ...
• [SLOW TEST:34.917 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:208
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":4,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:20:45.032: INFO: >>> kubeConfig: /root/.kube/config
... skipping 46 lines ...
Aug  2 09:20:35.719: INFO: PersistentVolumeClaim pvc-pmmwp found but phase is Pending instead of Bound.
Aug  2 09:20:37.912: INFO: PersistentVolumeClaim pvc-pmmwp found and phase=Bound (4.575688911s)
Aug  2 09:20:37.912: INFO: Waiting up to 3m0s for PersistentVolume local-g8cht to have phase Bound
Aug  2 09:20:38.102: INFO: PersistentVolume local-g8cht found and phase=Bound (189.571772ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-46sj
STEP: Creating a pod to test subpath
Aug  2 09:20:38.672: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-46sj" in namespace "provisioning-487" to be "Succeeded or Failed"
Aug  2 09:20:38.865: INFO: Pod "pod-subpath-test-preprovisionedpv-46sj": Phase="Pending", Reason="", readiness=false. Elapsed: 192.530895ms
Aug  2 09:20:41.055: INFO: Pod "pod-subpath-test-preprovisionedpv-46sj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382948629s
Aug  2 09:20:43.246: INFO: Pod "pod-subpath-test-preprovisionedpv-46sj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.573155852s
STEP: Saw pod success
Aug  2 09:20:43.246: INFO: Pod "pod-subpath-test-preprovisionedpv-46sj" satisfied condition "Succeeded or Failed"
Aug  2 09:20:43.436: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-46sj container test-container-volume-preprovisionedpv-46sj: <nil>
STEP: delete the pod
Aug  2 09:20:43.855: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-46sj to disappear
Aug  2 09:20:44.045: INFO: Pod pod-subpath-test-preprovisionedpv-46sj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-46sj
Aug  2 09:20:44.045: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-46sj" in namespace "provisioning-487"
... skipping 50 lines ...
Aug  2 09:20:34.035: INFO: PersistentVolumeClaim pvc-78z8w found but phase is Pending instead of Bound.
Aug  2 09:20:36.226: INFO: PersistentVolumeClaim pvc-78z8w found and phase=Bound (11.141809669s)
Aug  2 09:20:36.226: INFO: Waiting up to 3m0s for PersistentVolume local-blssl to have phase Bound
Aug  2 09:20:36.415: INFO: PersistentVolume local-blssl found and phase=Bound (189.212563ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j6gj
STEP: Creating a pod to test subpath
Aug  2 09:20:36.986: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j6gj" in namespace "provisioning-9913" to be "Succeeded or Failed"
Aug  2 09:20:37.176: INFO: Pod "pod-subpath-test-preprovisionedpv-j6gj": Phase="Pending", Reason="", readiness=false. Elapsed: 190.752336ms
Aug  2 09:20:39.370: INFO: Pod "pod-subpath-test-preprovisionedpv-j6gj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384302258s
Aug  2 09:20:41.560: INFO: Pod "pod-subpath-test-preprovisionedpv-j6gj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.574160482s
STEP: Saw pod success
Aug  2 09:20:41.560: INFO: Pod "pod-subpath-test-preprovisionedpv-j6gj" satisfied condition "Succeeded or Failed"
Aug  2 09:20:41.749: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-j6gj container test-container-subpath-preprovisionedpv-j6gj: <nil>
STEP: delete the pod
Aug  2 09:20:42.137: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j6gj to disappear
Aug  2 09:20:42.326: INFO: Pod pod-subpath-test-preprovisionedpv-j6gj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j6gj
Aug  2 09:20:42.326: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j6gj" in namespace "provisioning-9913"
STEP: Creating pod pod-subpath-test-preprovisionedpv-j6gj
STEP: Creating a pod to test subpath
Aug  2 09:20:42.705: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j6gj" in namespace "provisioning-9913" to be "Succeeded or Failed"
Aug  2 09:20:42.896: INFO: Pod "pod-subpath-test-preprovisionedpv-j6gj": Phase="Pending", Reason="", readiness=false. Elapsed: 190.441635ms
Aug  2 09:20:45.086: INFO: Pod "pod-subpath-test-preprovisionedpv-j6gj": Phase="Running", Reason="", readiness=true. Elapsed: 2.380291352s
Aug  2 09:20:47.275: INFO: Pod "pod-subpath-test-preprovisionedpv-j6gj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.569574427s
STEP: Saw pod success
Aug  2 09:20:47.275: INFO: Pod "pod-subpath-test-preprovisionedpv-j6gj" satisfied condition "Succeeded or Failed"
Aug  2 09:20:47.464: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-j6gj container test-container-subpath-preprovisionedpv-j6gj: <nil>
STEP: delete the pod
Aug  2 09:20:47.851: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j6gj to disappear
Aug  2 09:20:48.040: INFO: Pod pod-subpath-test-preprovisionedpv-j6gj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j6gj
Aug  2 09:20:48.040: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j6gj" in namespace "provisioning-9913"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:53.353: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 44 lines ...
• [SLOW TEST:76.436 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:223
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":5,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:54.025: INFO: Only supported for providers [gce gke] (not aws)
... skipping 72 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 117 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":2,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:56.338: INFO: Only supported for providers [openstack] (not aws)
... skipping 14 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1094
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":39,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:20:46.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 39 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1229
    should create services for rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":8,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:20:59.136: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 187 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:00.466: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 81 lines ...
Aug  2 09:20:56.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:109
STEP: Creating a pod to test downward api env vars
Aug  2 09:20:57.502: INFO: Waiting up to 5m0s for pod "downward-api-9be6d553-caec-407b-828f-888f1253f94f" in namespace "downward-api-9499" to be "Succeeded or Failed"
Aug  2 09:20:57.691: INFO: Pod "downward-api-9be6d553-caec-407b-828f-888f1253f94f": Phase="Pending", Reason="", readiness=false. Elapsed: 188.637425ms
Aug  2 09:20:59.880: INFO: Pod "downward-api-9be6d553-caec-407b-828f-888f1253f94f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.377679971s
STEP: Saw pod success
Aug  2 09:20:59.880: INFO: Pod "downward-api-9be6d553-caec-407b-828f-888f1253f94f" satisfied condition "Succeeded or Failed"
Aug  2 09:21:00.069: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod downward-api-9be6d553-caec-407b-828f-888f1253f94f container dapi-container: <nil>
STEP: delete the pod
Aug  2 09:21:00.453: INFO: Waiting for pod downward-api-9be6d553-caec-407b-828f-888f1253f94f to disappear
Aug  2 09:21:00.642: INFO: Pod downward-api-9be6d553-caec-407b-828f-888f1253f94f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:21:00.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9499" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":3,"skipped":35,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:01.063: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 212 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:394
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":5,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:9.929 seconds]
[k8s.io] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49
------------------------------
{"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":6,"skipped":19,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:03.335: INFO: Only supported for providers [gce gke] (not aws)
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:21:05.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-605" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated","total":-1,"completed":6,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:05.420: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 148 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1410
    should modify fsGroup if fsGroupPolicy=File
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1434
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":1,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:08.281: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 76 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:48
STEP: Creating a pod to test hostPath mode
Aug  2 09:21:02.588: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9052" to be "Succeeded or Failed"
Aug  2 09:21:02.779: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 190.543499ms
Aug  2 09:21:04.969: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380480018s
Aug  2 09:21:07.158: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.569475333s
STEP: Saw pod success
Aug  2 09:21:07.158: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug  2 09:21:07.346: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Aug  2 09:21:07.733: INFO: Waiting for pod pod-host-path-test to disappear
Aug  2 09:21:07.923: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 27 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1570
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":6,"skipped":69,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:08.317: INFO: Only supported for providers [vsphere] (not aws)
... skipping 111 lines ...
• [SLOW TEST:64.116 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should not be ready with a docker exec readiness probe timeout 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:233
------------------------------
{"msg":"PASSED [k8s.io] Probing container should not be ready with a docker exec readiness probe timeout ","total":-1,"completed":4,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:09.355: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 117 lines ...
STEP: Destroying namespace "services-3551" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":5,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:09.859: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Aug  2 09:21:04.485: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4528597b-09c9-4092-a10d-be7bfca71d7f" in namespace "projected-1893" to be "Succeeded or Failed"
Aug  2 09:21:04.674: INFO: Pod "downwardapi-volume-4528597b-09c9-4092-a10d-be7bfca71d7f": Phase="Pending", Reason="", readiness=false. Elapsed: 189.214986ms
Aug  2 09:21:06.863: INFO: Pod "downwardapi-volume-4528597b-09c9-4092-a10d-be7bfca71d7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378849203s
Aug  2 09:21:09.053: INFO: Pod "downwardapi-volume-4528597b-09c9-4092-a10d-be7bfca71d7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.568626039s
STEP: Saw pod success
Aug  2 09:21:09.053: INFO: Pod "downwardapi-volume-4528597b-09c9-4092-a10d-be7bfca71d7f" satisfied condition "Succeeded or Failed"
Aug  2 09:21:09.243: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod downwardapi-volume-4528597b-09c9-4092-a10d-be7bfca71d7f container client-container: <nil>
STEP: delete the pod
Aug  2 09:21:09.638: INFO: Waiting for pod downwardapi-volume-4528597b-09c9-4092-a10d-be7bfca71d7f to disappear
Aug  2 09:21:09.827: INFO: Pod downwardapi-volume-4528597b-09c9-4092-a10d-be7bfca71d7f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.860 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:21:03.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-03d1c51e-1000-4991-a095-f7aecd7da258
STEP: Creating a pod to test consume secrets
Aug  2 09:21:05.104: INFO: Waiting up to 5m0s for pod "pod-secrets-7a9eb743-59b5-4c21-8f29-5c0e9f95800b" in namespace "secrets-2843" to be "Succeeded or Failed"
Aug  2 09:21:05.293: INFO: Pod "pod-secrets-7a9eb743-59b5-4c21-8f29-5c0e9f95800b": Phase="Pending", Reason="", readiness=false. Elapsed: 189.842108ms
Aug  2 09:21:07.484: INFO: Pod "pod-secrets-7a9eb743-59b5-4c21-8f29-5c0e9f95800b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380133858s
Aug  2 09:21:09.674: INFO: Pod "pod-secrets-7a9eb743-59b5-4c21-8f29-5c0e9f95800b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.570221775s
STEP: Saw pod success
Aug  2 09:21:09.674: INFO: Pod "pod-secrets-7a9eb743-59b5-4c21-8f29-5c0e9f95800b" satisfied condition "Succeeded or Failed"
Aug  2 09:21:09.865: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod pod-secrets-7a9eb743-59b5-4c21-8f29-5c0e9f95800b container secret-volume-test: <nil>
STEP: delete the pod
Aug  2 09:21:10.254: INFO: Waiting for pod pod-secrets-7a9eb743-59b5-4c21-8f29-5c0e9f95800b to disappear
Aug  2 09:21:10.444: INFO: Pod pod-secrets-7a9eb743-59b5-4c21-8f29-5c0e9f95800b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.073 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:10.869: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 158 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:21:12.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2380" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:12.643: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 83 lines ...
• [SLOW TEST:13.692 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:466
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]","total":-1,"completed":9,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 39 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:19.747: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
... skipping 24 lines ...
• [SLOW TEST:10.867 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":5,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:20.292: INFO: Only supported for providers [gce gke] (not aws)
... skipping 21 lines ...
Aug  2 09:21:20.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug  2 09:21:21.435: INFO: Waiting up to 5m0s for pod "pod-72d264ff-d40d-4f19-8f1c-f05dbdbf608b" in namespace "emptydir-2239" to be "Succeeded or Failed"
Aug  2 09:21:21.623: INFO: Pod "pod-72d264ff-d40d-4f19-8f1c-f05dbdbf608b": Phase="Pending", Reason="", readiness=false. Elapsed: 188.464931ms
Aug  2 09:21:23.812: INFO: Pod "pod-72d264ff-d40d-4f19-8f1c-f05dbdbf608b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.377503296s
STEP: Saw pod success
Aug  2 09:21:23.812: INFO: Pod "pod-72d264ff-d40d-4f19-8f1c-f05dbdbf608b" satisfied condition "Succeeded or Failed"
Aug  2 09:21:24.001: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod pod-72d264ff-d40d-4f19-8f1c-f05dbdbf608b container test-container: <nil>
STEP: delete the pod
Aug  2 09:21:24.392: INFO: Waiting for pod pod-72d264ff-d40d-4f19-8f1c-f05dbdbf608b to disappear
Aug  2 09:21:24.592: INFO: Pod pod-72d264ff-d40d-4f19-8f1c-f05dbdbf608b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 18 lines ...
Aug  2 09:20:32.325: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  2 09:20:32.325: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  2 09:20:32.325: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-7296-aws-scvdgcp      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-7296    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-7296-aws-scvdgcp,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-7296    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-7296-aws-scvdgcp,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a StorageClass provisioning-7296-aws-scvdgcp
STEP: creating a claim
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Aug  2 09:20:33.092: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-572cb" in namespace "provisioning-7296" to be "Succeeded or Failed"
Aug  2 09:20:33.282: INFO: Pod "pvc-volume-tester-writer-572cb": Phase="Pending", Reason="", readiness=false. Elapsed: 189.605868ms
Aug  2 09:20:35.472: INFO: Pod "pvc-volume-tester-writer-572cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379635007s
Aug  2 09:20:37.665: INFO: Pod "pvc-volume-tester-writer-572cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572109079s
Aug  2 09:20:39.855: INFO: Pod "pvc-volume-tester-writer-572cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.762447277s
Aug  2 09:20:42.045: INFO: Pod "pvc-volume-tester-writer-572cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.952538757s
Aug  2 09:20:44.236: INFO: Pod "pvc-volume-tester-writer-572cb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.143330986s
Aug  2 09:20:46.426: INFO: Pod "pvc-volume-tester-writer-572cb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.333784423s
Aug  2 09:20:48.624: INFO: Pod "pvc-volume-tester-writer-572cb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.531422518s
Aug  2 09:20:50.814: INFO: Pod "pvc-volume-tester-writer-572cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.721526932s
STEP: Saw pod success
Aug  2 09:20:50.814: INFO: Pod "pvc-volume-tester-writer-572cb" satisfied condition "Succeeded or Failed"
Aug  2 09:20:51.199: INFO: Pod pvc-volume-tester-writer-572cb has the following logs: 
Aug  2 09:20:51.199: INFO: Deleting pod "pvc-volume-tester-writer-572cb" in namespace "provisioning-7296"
Aug  2 09:20:51.393: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-572cb" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-48-162.ap-southeast-2.compute.internal"
Aug  2 09:20:52.154: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-4kf6z" in namespace "provisioning-7296" to be "Succeeded or Failed"
Aug  2 09:20:52.343: INFO: Pod "pvc-volume-tester-reader-4kf6z": Phase="Pending", Reason="", readiness=false. Elapsed: 189.747724ms
Aug  2 09:20:54.534: INFO: Pod "pvc-volume-tester-reader-4kf6z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379884744s
Aug  2 09:20:56.724: INFO: Pod "pvc-volume-tester-reader-4kf6z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.569826365s
STEP: Saw pod success
Aug  2 09:20:56.724: INFO: Pod "pvc-volume-tester-reader-4kf6z" satisfied condition "Succeeded or Failed"
Aug  2 09:20:56.921: INFO: Pod pvc-volume-tester-reader-4kf6z has the following logs: hello world

Aug  2 09:20:56.921: INFO: Deleting pod "pvc-volume-tester-reader-4kf6z" in namespace "provisioning-7296"
Aug  2 09:20:57.114: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-4kf6z" to be fully deleted
Aug  2 09:20:57.304: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-lxh8j] to have phase Bound
Aug  2 09:20:57.494: INFO: PersistentVolumeClaim pvc-lxh8j found and phase=Bound (189.667988ms)
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":33,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":5,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:24.996: INFO: Only supported for providers [vsphere] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1440
------------------------------
... skipping 6 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-b144323f-2f79-4373-aebe-a1f72bb5b141
STEP: Creating a pod to test consume secrets
Aug  2 09:21:15.002: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7c371654-a3f7-4d64-97fc-57e6363ccf0e" in namespace "projected-870" to be "Succeeded or Failed"
Aug  2 09:21:15.191: INFO: Pod "pod-projected-secrets-7c371654-a3f7-4d64-97fc-57e6363ccf0e": Phase="Pending", Reason="", readiness=false. Elapsed: 189.571354ms
Aug  2 09:21:17.381: INFO: Pod "pod-projected-secrets-7c371654-a3f7-4d64-97fc-57e6363ccf0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379565343s
Aug  2 09:21:19.572: INFO: Pod "pod-projected-secrets-7c371654-a3f7-4d64-97fc-57e6363ccf0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570546886s
Aug  2 09:21:21.762: INFO: Pod "pod-projected-secrets-7c371654-a3f7-4d64-97fc-57e6363ccf0e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.760533783s
Aug  2 09:21:23.953: INFO: Pod "pod-projected-secrets-7c371654-a3f7-4d64-97fc-57e6363ccf0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.950994822s
STEP: Saw pod success
Aug  2 09:21:23.953: INFO: Pod "pod-projected-secrets-7c371654-a3f7-4d64-97fc-57e6363ccf0e" satisfied condition "Succeeded or Failed"
Aug  2 09:21:24.144: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-projected-secrets-7c371654-a3f7-4d64-97fc-57e6363ccf0e container projected-secret-volume-test: <nil>
STEP: delete the pod
Aug  2 09:21:24.531: INFO: Waiting for pod pod-projected-secrets-7c371654-a3f7-4d64-97fc-57e6363ccf0e to disappear
Aug  2 09:21:24.722: INFO: Pod pod-projected-secrets-7c371654-a3f7-4d64-97fc-57e6363ccf0e no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 5 lines ...
• [SLOW TEST:12.390 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":10,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:25.303: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 81 lines ...
Aug  2 09:20:49.005: INFO: PersistentVolumeClaim pvc-kvhmm found but phase is Pending instead of Bound.
Aug  2 09:20:51.194: INFO: PersistentVolumeClaim pvc-kvhmm found and phase=Bound (11.136829795s)
Aug  2 09:20:51.194: INFO: Waiting up to 3m0s for PersistentVolume local-g6zm2 to have phase Bound
Aug  2 09:20:51.383: INFO: PersistentVolume local-g6zm2 found and phase=Bound (189.253938ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-r2x9
STEP: Creating a pod to test atomic-volume-subpath
Aug  2 09:20:51.954: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-r2x9" in namespace "provisioning-1890" to be "Succeeded or Failed"
Aug  2 09:20:52.144: INFO: Pod "pod-subpath-test-preprovisionedpv-r2x9": Phase="Pending", Reason="", readiness=false. Elapsed: 189.513407ms
Aug  2 09:20:54.333: INFO: Pod "pod-subpath-test-preprovisionedpv-r2x9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378721402s
Aug  2 09:20:56.522: INFO: Pod "pod-subpath-test-preprovisionedpv-r2x9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.567569609s
Aug  2 09:20:58.711: INFO: Pod "pod-subpath-test-preprovisionedpv-r2x9": Phase="Running", Reason="", readiness=true. Elapsed: 6.756578071s
Aug  2 09:21:00.900: INFO: Pod "pod-subpath-test-preprovisionedpv-r2x9": Phase="Running", Reason="", readiness=true. Elapsed: 8.945612659s
Aug  2 09:21:03.091: INFO: Pod "pod-subpath-test-preprovisionedpv-r2x9": Phase="Running", Reason="", readiness=true. Elapsed: 11.136964859s
... skipping 4 lines ...
Aug  2 09:21:14.042: INFO: Pod "pod-subpath-test-preprovisionedpv-r2x9": Phase="Running", Reason="", readiness=true. Elapsed: 22.088121163s
Aug  2 09:21:16.232: INFO: Pod "pod-subpath-test-preprovisionedpv-r2x9": Phase="Running", Reason="", readiness=true. Elapsed: 24.277487248s
Aug  2 09:21:18.421: INFO: Pod "pod-subpath-test-preprovisionedpv-r2x9": Phase="Running", Reason="", readiness=true. Elapsed: 26.466819594s
Aug  2 09:21:20.613: INFO: Pod "pod-subpath-test-preprovisionedpv-r2x9": Phase="Running", Reason="", readiness=true. Elapsed: 28.658836116s
Aug  2 09:21:22.803: INFO: Pod "pod-subpath-test-preprovisionedpv-r2x9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.848279464s
STEP: Saw pod success
Aug  2 09:21:22.803: INFO: Pod "pod-subpath-test-preprovisionedpv-r2x9" satisfied condition "Succeeded or Failed"
Aug  2 09:21:22.992: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-r2x9 container test-container-subpath-preprovisionedpv-r2x9: <nil>
STEP: delete the pod
Aug  2 09:21:23.390: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-r2x9 to disappear
Aug  2 09:21:23.578: INFO: Pod pod-subpath-test-preprovisionedpv-r2x9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-r2x9
Aug  2 09:21:23.579: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-r2x9" in namespace "provisioning-1890"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:26.609: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 150 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":75,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:20.004 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":8,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:30.241: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 44 lines ...
Aug  2 09:21:19.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug  2 09:21:29.620: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:30.410: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 187 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:217
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":6,"skipped":35,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":49,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:21:01.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 136 lines ...
• [SLOW TEST:21.965 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:273
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it","total":-1,"completed":6,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:32.895: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 18 lines ...
Aug  2 09:21:25.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Aug  2 09:21:26.484: INFO: Waiting up to 5m0s for pod "downward-api-3e7a5e9c-4fa0-4037-9bc8-8f689930ddaf" in namespace "downward-api-7576" to be "Succeeded or Failed"
Aug  2 09:21:26.674: INFO: Pod "downward-api-3e7a5e9c-4fa0-4037-9bc8-8f689930ddaf": Phase="Pending", Reason="", readiness=false. Elapsed: 189.789165ms
Aug  2 09:21:28.864: INFO: Pod "downward-api-3e7a5e9c-4fa0-4037-9bc8-8f689930ddaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380048159s
Aug  2 09:21:31.054: INFO: Pod "downward-api-3e7a5e9c-4fa0-4037-9bc8-8f689930ddaf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570034096s
Aug  2 09:21:33.245: INFO: Pod "downward-api-3e7a5e9c-4fa0-4037-9bc8-8f689930ddaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.761223488s
STEP: Saw pod success
Aug  2 09:21:33.246: INFO: Pod "downward-api-3e7a5e9c-4fa0-4037-9bc8-8f689930ddaf" satisfied condition "Succeeded or Failed"
Aug  2 09:21:33.435: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod downward-api-3e7a5e9c-4fa0-4037-9bc8-8f689930ddaf container dapi-container: <nil>
STEP: delete the pod
Aug  2 09:21:33.822: INFO: Waiting for pod downward-api-3e7a5e9c-4fa0-4037-9bc8-8f689930ddaf to disappear
Aug  2 09:21:34.012: INFO: Pod downward-api-3e7a5e9c-4fa0-4037-9bc8-8f689930ddaf no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:9.054 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:34.404: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Aug  2 09:21:31.606: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68764281-c11d-48d5-bf05-9616ce93d592" in namespace "downward-api-4781" to be "Succeeded or Failed"
Aug  2 09:21:31.798: INFO: Pod "downwardapi-volume-68764281-c11d-48d5-bf05-9616ce93d592": Phase="Pending", Reason="", readiness=false. Elapsed: 192.288123ms
Aug  2 09:21:33.989: INFO: Pod "downwardapi-volume-68764281-c11d-48d5-bf05-9616ce93d592": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.382930302s
STEP: Saw pod success
Aug  2 09:21:33.989: INFO: Pod "downwardapi-volume-68764281-c11d-48d5-bf05-9616ce93d592" satisfied condition "Succeeded or Failed"
Aug  2 09:21:34.180: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod downwardapi-volume-68764281-c11d-48d5-bf05-9616ce93d592 container client-container: <nil>
STEP: delete the pod
Aug  2 09:21:34.568: INFO: Waiting for pod downwardapi-volume-68764281-c11d-48d5-bf05-9616ce93d592 to disappear
Aug  2 09:21:34.759: INFO: Pod downwardapi-volume-68764281-c11d-48d5-bf05-9616ce93d592 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 9 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name projected-secret-test-237a8a1b-93d0-4c46-8f3a-d74e8726385e
STEP: Creating a pod to test consume secrets
Aug  2 09:21:28.878: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0eb2f7fb-4c02-448e-932a-12e54bf33ca8" in namespace "projected-338" to be "Succeeded or Failed"
Aug  2 09:21:29.067: INFO: Pod "pod-projected-secrets-0eb2f7fb-4c02-448e-932a-12e54bf33ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 188.799009ms
Aug  2 09:21:31.256: INFO: Pod "pod-projected-secrets-0eb2f7fb-4c02-448e-932a-12e54bf33ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377684923s
Aug  2 09:21:33.446: INFO: Pod "pod-projected-secrets-0eb2f7fb-4c02-448e-932a-12e54bf33ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.567746512s
Aug  2 09:21:35.635: INFO: Pod "pod-projected-secrets-0eb2f7fb-4c02-448e-932a-12e54bf33ca8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.757135852s
STEP: Saw pod success
Aug  2 09:21:35.635: INFO: Pod "pod-projected-secrets-0eb2f7fb-4c02-448e-932a-12e54bf33ca8" satisfied condition "Succeeded or Failed"
Aug  2 09:21:35.824: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod pod-projected-secrets-0eb2f7fb-4c02-448e-932a-12e54bf33ca8 container secret-volume-test: <nil>
STEP: delete the pod
Aug  2 09:21:36.219: INFO: Waiting for pod pod-projected-secrets-0eb2f7fb-4c02-448e-932a-12e54bf33ca8 to disappear
Aug  2 09:21:36.408: INFO: Pod pod-projected-secrets-0eb2f7fb-4c02-448e-932a-12e54bf33ca8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:9.235 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:36.799: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 116 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":12,"skipped":57,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
... skipping 176 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":2,"skipped":5,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":77,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:45.482: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 125 lines ...
Aug  2 09:21:35.847: INFO: Pod aws-client still exists
Aug  2 09:21:37.657: INFO: Waiting for pod aws-client to disappear
Aug  2 09:21:37.847: INFO: Pod aws-client still exists
Aug  2 09:21:39.657: INFO: Waiting for pod aws-client to disappear
Aug  2 09:21:39.847: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Aug  2 09:21:40.787: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-08c8d56a86bc91617", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08c8d56a86bc91617 is currently attached to i-0f96665b2ac6ca911
	status code: 400, request id: deba4bb4-e21c-4f6f-821b-fac6db71334f
Aug  2 09:21:46.698: INFO: Successfully deleted PD "aws://ap-southeast-2a/vol-08c8d56a86bc91617".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:21:46.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-131" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent","total":-1,"completed":1,"skipped":34,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:21:11.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 121 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Aug  2 09:21:40.799: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ef18627-4588-430d-812f-6ad68d6bd3cc" in namespace "projected-3060" to be "Succeeded or Failed"
Aug  2 09:21:40.989: INFO: Pod "downwardapi-volume-6ef18627-4588-430d-812f-6ad68d6bd3cc": Phase="Pending", Reason="", readiness=false. Elapsed: 189.690805ms
Aug  2 09:21:43.179: INFO: Pod "downwardapi-volume-6ef18627-4588-430d-812f-6ad68d6bd3cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37984523s
Aug  2 09:21:45.369: INFO: Pod "downwardapi-volume-6ef18627-4588-430d-812f-6ad68d6bd3cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570282057s
Aug  2 09:21:47.560: INFO: Pod "downwardapi-volume-6ef18627-4588-430d-812f-6ad68d6bd3cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.76071677s
STEP: Saw pod success
Aug  2 09:21:47.560: INFO: Pod "downwardapi-volume-6ef18627-4588-430d-812f-6ad68d6bd3cc" satisfied condition "Succeeded or Failed"
Aug  2 09:21:47.750: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod downwardapi-volume-6ef18627-4588-430d-812f-6ad68d6bd3cc container client-container: <nil>
STEP: delete the pod
Aug  2 09:21:48.139: INFO: Waiting for pod downwardapi-volume-6ef18627-4588-430d-812f-6ad68d6bd3cc to disappear
Aug  2 09:21:48.329: INFO: Pod downwardapi-volume-6ef18627-4588-430d-812f-6ad68d6bd3cc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:9.057 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:48.732: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 73 lines ...
Aug  2 09:20:47.579: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug  2 09:20:47.779: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathhhddz] to have phase Bound
Aug  2 09:20:47.968: INFO: PersistentVolumeClaim csi-hostpathhhddz found but phase is Pending instead of Bound.
Aug  2 09:20:50.159: INFO: PersistentVolumeClaim csi-hostpathhhddz found and phase=Bound (2.379229401s)
STEP: Expanding non-expandable pvc
Aug  2 09:20:50.536: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug  2 09:20:50.918: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:20:53.297: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:20:55.296: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:20:57.296: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:20:59.298: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:21:01.298: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:21:03.298: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:21:05.297: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:21:07.297: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:21:09.297: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:21:11.297: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:21:13.296: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:21:15.297: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:21:17.303: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:21:19.300: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:21:21.301: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Aug  2 09:21:21.680: INFO: Error updating pvc csi-hostpathhhddz: persistentvolumeclaims "csi-hostpathhhddz" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Aug  2 09:21:21.680: INFO: Deleting PersistentVolumeClaim "csi-hostpathhhddz"
Aug  2 09:21:21.870: INFO: Waiting up to 5m0s for PersistentVolume pvc-f7ec6a82-1bf1-4745-bf4d-f74a47a0526d to get deleted
Aug  2 09:21:22.059: INFO: PersistentVolume pvc-f7ec6a82-1bf1-4745-bf4d-f74a47a0526d was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-6275
... skipping 77 lines ...
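The repeated `is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize` errors above are the expected negative path of the "Expanding non-expandable pvc" step: the API server rejects the PVC size update because the provisioning StorageClass does not allow expansion. For contrast, a class that would permit resizing sets `allowVolumeExpansion: true` — a hypothetical example (the name and provisioner below are illustrative, not from this test):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-example    # hypothetical name
provisioner: ebs.csi.aws.com  # assumption: AWS EBS CSI driver
allowVolumeExpansion: true    # required for PVC resize to be accepted
```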
Aug  2 09:21:35.207: INFO: PersistentVolumeClaim pvc-dtx86 found but phase is Pending instead of Bound.
Aug  2 09:21:37.397: INFO: PersistentVolumeClaim pvc-dtx86 found and phase=Bound (2.379878119s)
Aug  2 09:21:37.397: INFO: Waiting up to 3m0s for PersistentVolume local-snrxb to have phase Bound
Aug  2 09:21:37.587: INFO: PersistentVolume local-snrxb found and phase=Bound (189.994381ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-w8r2
STEP: Creating a pod to test subpath
Aug  2 09:21:38.159: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-w8r2" in namespace "provisioning-2194" to be "Succeeded or Failed"
Aug  2 09:21:38.348: INFO: Pod "pod-subpath-test-preprovisionedpv-w8r2": Phase="Pending", Reason="", readiness=false. Elapsed: 189.774524ms
Aug  2 09:21:40.539: INFO: Pod "pod-subpath-test-preprovisionedpv-w8r2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380140088s
Aug  2 09:21:42.729: INFO: Pod "pod-subpath-test-preprovisionedpv-w8r2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.570332807s
STEP: Saw pod success
Aug  2 09:21:42.729: INFO: Pod "pod-subpath-test-preprovisionedpv-w8r2" satisfied condition "Succeeded or Failed"
Aug  2 09:21:42.919: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-w8r2 container test-container-subpath-preprovisionedpv-w8r2: <nil>
STEP: delete the pod
Aug  2 09:21:43.308: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-w8r2 to disappear
Aug  2 09:21:43.498: INFO: Pod pod-subpath-test-preprovisionedpv-w8r2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-w8r2
Aug  2 09:21:43.498: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-w8r2" in namespace "provisioning-2194"
STEP: Creating pod pod-subpath-test-preprovisionedpv-w8r2
STEP: Creating a pod to test subpath
Aug  2 09:21:43.883: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-w8r2" in namespace "provisioning-2194" to be "Succeeded or Failed"
Aug  2 09:21:44.073: INFO: Pod "pod-subpath-test-preprovisionedpv-w8r2": Phase="Pending", Reason="", readiness=false. Elapsed: 190.176417ms
Aug  2 09:21:46.263: INFO: Pod "pod-subpath-test-preprovisionedpv-w8r2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.3801972s
STEP: Saw pod success
Aug  2 09:21:46.263: INFO: Pod "pod-subpath-test-preprovisionedpv-w8r2" satisfied condition "Succeeded or Failed"
Aug  2 09:21:46.453: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-w8r2 container test-container-subpath-preprovisionedpv-w8r2: <nil>
STEP: delete the pod
Aug  2 09:21:46.855: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-w8r2 to disappear
Aug  2 09:21:47.047: INFO: Pod pod-subpath-test-preprovisionedpv-w8r2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-w8r2
Aug  2 09:21:47.047: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-w8r2" in namespace "provisioning-2194"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:53.554: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 101 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should not deadlock when a pod's predecessor fails
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:248
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:21:55.387: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 66 lines ...
Aug  2 09:21:53.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug  2 09:21:54.720: INFO: Waiting up to 5m0s for pod "pod-2559792e-c544-438a-85d4-90135d21557c" in namespace "emptydir-9975" to be "Succeeded or Failed"
Aug  2 09:21:54.910: INFO: Pod "pod-2559792e-c544-438a-85d4-90135d21557c": Phase="Pending", Reason="", readiness=false. Elapsed: 189.759775ms
Aug  2 09:21:57.101: INFO: Pod "pod-2559792e-c544-438a-85d4-90135d21557c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.379952487s
STEP: Saw pod success
Aug  2 09:21:57.101: INFO: Pod "pod-2559792e-c544-438a-85d4-90135d21557c" satisfied condition "Succeeded or Failed"
Aug  2 09:21:57.294: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod pod-2559792e-c544-438a-85d4-90135d21557c container test-container: <nil>
STEP: delete the pod
Aug  2 09:21:57.684: INFO: Waiting for pod pod-2559792e-c544-438a-85d4-90135d21557c to disappear
Aug  2 09:21:57.874: INFO: Pod pod-2559792e-c544-438a-85d4-90135d21557c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:21:57.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9975" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Aug  2 09:21:48.678: INFO: PersistentVolumeClaim pvc-t9tsf found but phase is Pending instead of Bound.
Aug  2 09:21:50.868: INFO: PersistentVolumeClaim pvc-t9tsf found and phase=Bound (4.570520407s)
Aug  2 09:21:50.868: INFO: Waiting up to 3m0s for PersistentVolume local-mtp8l to have phase Bound
Aug  2 09:21:51.058: INFO: PersistentVolume local-mtp8l found and phase=Bound (189.614895ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zb5k
STEP: Creating a pod to test subpath
Aug  2 09:21:51.629: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zb5k" in namespace "provisioning-4075" to be "Succeeded or Failed"
Aug  2 09:21:51.820: INFO: Pod "pod-subpath-test-preprovisionedpv-zb5k": Phase="Pending", Reason="", readiness=false. Elapsed: 190.143345ms
Aug  2 09:21:54.011: INFO: Pod "pod-subpath-test-preprovisionedpv-zb5k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381042873s
Aug  2 09:21:56.201: INFO: Pod "pod-subpath-test-preprovisionedpv-zb5k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.571589616s
STEP: Saw pod success
Aug  2 09:21:56.201: INFO: Pod "pod-subpath-test-preprovisionedpv-zb5k" satisfied condition "Succeeded or Failed"
Aug  2 09:21:56.393: INFO: Trying to get logs from node ip-172-20-47-13.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-zb5k container test-container-volume-preprovisionedpv-zb5k: <nil>
STEP: delete the pod
Aug  2 09:21:56.792: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zb5k to disappear
Aug  2 09:21:56.982: INFO: Pod pod-subpath-test-preprovisionedpv-zb5k no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zb5k
Aug  2 09:21:56.982: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zb5k" in namespace "provisioning-4075"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":59,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:21:55.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-cc48784c-1baf-4e32-b6f8-4f797ac8dcfd
STEP: Creating a pod to test consume configMaps
Aug  2 09:21:56.757: INFO: Waiting up to 5m0s for pod "pod-configmaps-99f39740-31f1-4be8-9caa-2af2fbafda12" in namespace "configmap-9286" to be "Succeeded or Failed"
Aug  2 09:21:56.946: INFO: Pod "pod-configmaps-99f39740-31f1-4be8-9caa-2af2fbafda12": Phase="Pending", Reason="", readiness=false. Elapsed: 188.988817ms
Aug  2 09:21:59.136: INFO: Pod "pod-configmaps-99f39740-31f1-4be8-9caa-2af2fbafda12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.378718181s
STEP: Saw pod success
Aug  2 09:21:59.136: INFO: Pod "pod-configmaps-99f39740-31f1-4be8-9caa-2af2fbafda12" satisfied condition "Succeeded or Failed"
Aug  2 09:21:59.325: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-configmaps-99f39740-31f1-4be8-9caa-2af2fbafda12 container agnhost-container: <nil>
STEP: delete the pod
Aug  2 09:21:59.725: INFO: Waiting for pod pod-configmaps-99f39740-31f1-4be8-9caa-2af2fbafda12 to disappear
Aug  2 09:21:59.915: INFO: Pod pod-configmaps-99f39740-31f1-4be8-9caa-2af2fbafda12 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:21:59.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9286" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:00.346: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
Aug  2 09:21:49.703: INFO: PersistentVolumeClaim pvc-47b8p found but phase is Pending instead of Bound.
Aug  2 09:21:51.891: INFO: PersistentVolumeClaim pvc-47b8p found and phase=Bound (15.511614756s)
Aug  2 09:21:51.891: INFO: Waiting up to 3m0s for PersistentVolume local-fgsrd to have phase Bound
Aug  2 09:21:52.080: INFO: PersistentVolume local-fgsrd found and phase=Bound (188.576036ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tmwv
STEP: Creating a pod to test subpath
Aug  2 09:21:52.647: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tmwv" in namespace "provisioning-5594" to be "Succeeded or Failed"
Aug  2 09:21:52.836: INFO: Pod "pod-subpath-test-preprovisionedpv-tmwv": Phase="Pending", Reason="", readiness=false. Elapsed: 189.193967ms
Aug  2 09:21:55.025: INFO: Pod "pod-subpath-test-preprovisionedpv-tmwv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378207756s
Aug  2 09:21:57.215: INFO: Pod "pod-subpath-test-preprovisionedpv-tmwv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.568120037s
STEP: Saw pod success
Aug  2 09:21:57.215: INFO: Pod "pod-subpath-test-preprovisionedpv-tmwv" satisfied condition "Succeeded or Failed"
Aug  2 09:21:57.404: INFO: Trying to get logs from node ip-172-20-47-13.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-tmwv container test-container-volume-preprovisionedpv-tmwv: <nil>
STEP: delete the pod
Aug  2 09:21:57.789: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tmwv to disappear
Aug  2 09:21:57.978: INFO: Pod pod-subpath-test-preprovisionedpv-tmwv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tmwv
Aug  2 09:21:57.978: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tmwv" in namespace "provisioning-5594"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:00.652: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:479
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":2,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:01.208: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 42 lines ...
Aug  2 09:21:49.176: INFO: PersistentVolumeClaim pvc-77kc7 found but phase is Pending instead of Bound.
Aug  2 09:21:51.365: INFO: PersistentVolumeClaim pvc-77kc7 found and phase=Bound (15.515741782s)
Aug  2 09:21:51.365: INFO: Waiting up to 3m0s for PersistentVolume local-c7rgv to have phase Bound
Aug  2 09:21:51.554: INFO: PersistentVolume local-c7rgv found and phase=Bound (189.231085ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bvpp
STEP: Creating a pod to test subpath
Aug  2 09:21:52.125: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bvpp" in namespace "provisioning-6778" to be "Succeeded or Failed"
Aug  2 09:21:52.315: INFO: Pod "pod-subpath-test-preprovisionedpv-bvpp": Phase="Pending", Reason="", readiness=false. Elapsed: 189.091918ms
Aug  2 09:21:54.504: INFO: Pod "pod-subpath-test-preprovisionedpv-bvpp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378878501s
Aug  2 09:21:56.694: INFO: Pod "pod-subpath-test-preprovisionedpv-bvpp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.568420614s
STEP: Saw pod success
Aug  2 09:21:56.694: INFO: Pod "pod-subpath-test-preprovisionedpv-bvpp" satisfied condition "Succeeded or Failed"
Aug  2 09:21:56.883: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-bvpp container test-container-subpath-preprovisionedpv-bvpp: <nil>
STEP: delete the pod
Aug  2 09:21:57.281: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bvpp to disappear
Aug  2 09:21:57.473: INFO: Pod pod-subpath-test-preprovisionedpv-bvpp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bvpp
Aug  2 09:21:57.473: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bvpp" in namespace "provisioning-6778"
... skipping 41 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:22:03.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apf-4332" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":3,"skipped":44,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:03.774: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:206

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":7,"skipped":27,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:21:51.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":8,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:05.188: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Aug  2 09:22:06.355: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d240f300-f4f5-49b1-843b-2b5b20b673b8" in namespace "downward-api-7247" to be "Succeeded or Failed"
Aug  2 09:22:06.545: INFO: Pod "downwardapi-volume-d240f300-f4f5-49b1-843b-2b5b20b673b8": Phase="Pending", Reason="", readiness=false. Elapsed: 189.778551ms
Aug  2 09:22:08.734: INFO: Pod "downwardapi-volume-d240f300-f4f5-49b1-843b-2b5b20b673b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.37869336s
STEP: Saw pod success
Aug  2 09:22:08.734: INFO: Pod "downwardapi-volume-d240f300-f4f5-49b1-843b-2b5b20b673b8" satisfied condition "Succeeded or Failed"
Aug  2 09:22:08.938: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod downwardapi-volume-d240f300-f4f5-49b1-843b-2b5b20b673b8 container client-container: <nil>
STEP: delete the pod
Aug  2 09:22:09.323: INFO: Waiting for pod downwardapi-volume-d240f300-f4f5-49b1-843b-2b5b20b673b8 to disappear
Aug  2 09:22:09.512: INFO: Pod downwardapi-volume-d240f300-f4f5-49b1-843b-2b5b20b673b8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:22:09.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7247" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:09.916: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 92 lines ...
• [SLOW TEST:10.011 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":8,"skipped":41,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Aug  2 09:21:49.227: INFO: PersistentVolumeClaim pvc-rq89r found but phase is Pending instead of Bound.
Aug  2 09:21:51.416: INFO: PersistentVolumeClaim pvc-rq89r found and phase=Bound (13.323388207s)
Aug  2 09:21:51.416: INFO: Waiting up to 3m0s for PersistentVolume local-c87hc to have phase Bound
Aug  2 09:21:51.606: INFO: PersistentVolume local-c87hc found and phase=Bound (189.54409ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nsq9
STEP: Creating a pod to test subpath
Aug  2 09:21:52.174: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nsq9" in namespace "provisioning-123" to be "Succeeded or Failed"
Aug  2 09:21:52.363: INFO: Pod "pod-subpath-test-preprovisionedpv-nsq9": Phase="Pending", Reason="", readiness=false. Elapsed: 189.161938ms
Aug  2 09:21:54.553: INFO: Pod "pod-subpath-test-preprovisionedpv-nsq9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378504749s
Aug  2 09:21:56.742: INFO: Pod "pod-subpath-test-preprovisionedpv-nsq9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.567971742s
Aug  2 09:21:58.931: INFO: Pod "pod-subpath-test-preprovisionedpv-nsq9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.757186656s
Aug  2 09:22:01.122: INFO: Pod "pod-subpath-test-preprovisionedpv-nsq9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.948455591s
STEP: Saw pod success
Aug  2 09:22:01.123: INFO: Pod "pod-subpath-test-preprovisionedpv-nsq9" satisfied condition "Succeeded or Failed"
Aug  2 09:22:01.311: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-nsq9 container test-container-subpath-preprovisionedpv-nsq9: <nil>
STEP: delete the pod
Aug  2 09:22:01.713: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nsq9 to disappear
Aug  2 09:22:01.902: INFO: Pod pod-subpath-test-preprovisionedpv-nsq9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nsq9
Aug  2 09:22:01.902: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nsq9" in namespace "provisioning-123"
STEP: Creating pod pod-subpath-test-preprovisionedpv-nsq9
STEP: Creating a pod to test subpath
Aug  2 09:22:02.289: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nsq9" in namespace "provisioning-123" to be "Succeeded or Failed"
Aug  2 09:22:02.478: INFO: Pod "pod-subpath-test-preprovisionedpv-nsq9": Phase="Pending", Reason="", readiness=false. Elapsed: 188.793708ms
Aug  2 09:22:04.668: INFO: Pod "pod-subpath-test-preprovisionedpv-nsq9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379580764s
Aug  2 09:22:06.860: INFO: Pod "pod-subpath-test-preprovisionedpv-nsq9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570691611s
Aug  2 09:22:09.048: INFO: Pod "pod-subpath-test-preprovisionedpv-nsq9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.759587785s
STEP: Saw pod success
Aug  2 09:22:09.049: INFO: Pod "pod-subpath-test-preprovisionedpv-nsq9" satisfied condition "Succeeded or Failed"
Aug  2 09:22:09.237: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-nsq9 container test-container-subpath-preprovisionedpv-nsq9: <nil>
STEP: delete the pod
Aug  2 09:22:09.627: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nsq9 to disappear
Aug  2 09:22:09.818: INFO: Pod pod-subpath-test-preprovisionedpv-nsq9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nsq9
Aug  2 09:22:09.818: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nsq9" in namespace "provisioning-123"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:12.468: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:22:13.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5616" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:13.605: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 71 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-076811ba-e50c-465c-b60a-732899b49713
STEP: Creating a pod to test consume configMaps
Aug  2 09:22:14.954: INFO: Waiting up to 5m0s for pod "pod-configmaps-df049bd3-812a-4e58-9dda-56317542641a" in namespace "configmap-646" to be "Succeeded or Failed"
Aug  2 09:22:15.143: INFO: Pod "pod-configmaps-df049bd3-812a-4e58-9dda-56317542641a": Phase="Pending", Reason="", readiness=false. Elapsed: 189.138947ms
Aug  2 09:22:17.333: INFO: Pod "pod-configmaps-df049bd3-812a-4e58-9dda-56317542641a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.378732691s
STEP: Saw pod success
Aug  2 09:22:17.333: INFO: Pod "pod-configmaps-df049bd3-812a-4e58-9dda-56317542641a" satisfied condition "Succeeded or Failed"
Aug  2 09:22:17.522: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod pod-configmaps-df049bd3-812a-4e58-9dda-56317542641a container agnhost-container: <nil>
STEP: delete the pod
Aug  2 09:22:17.917: INFO: Waiting for pod pod-configmaps-df049bd3-812a-4e58-9dda-56317542641a to disappear
Aug  2 09:22:18.106: INFO: Pod pod-configmaps-df049bd3-812a-4e58-9dda-56317542641a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:22:18.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-646" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:18.509: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 15 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:22:09.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:117
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:22:19.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2087" for this suite.


• [SLOW TEST:9.899 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:117
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":10,"skipped":39,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:19.869: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 91 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":14,"skipped":63,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 30 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":8,"skipped":92,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:20.318: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 37 lines ...
• [SLOW TEST:21.441 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:778
------------------------------
{"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":8,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:21.063: INFO: Only supported for providers [openstack] (not aws)
... skipping 33 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:236

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":3,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:21:47.093: INFO: >>> kubeConfig: /root/.kube/config
... skipping 6 lines ...
Aug  2 09:21:48.045: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8767-aws-scsxb6x
STEP: creating a claim
Aug  2 09:21:48.236: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-bnff
STEP: Creating a pod to test subpath
Aug  2 09:21:48.809: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-bnff" in namespace "provisioning-8767" to be "Succeeded or Failed"
Aug  2 09:21:48.999: INFO: Pod "pod-subpath-test-dynamicpv-bnff": Phase="Pending", Reason="", readiness=false. Elapsed: 189.659463ms
Aug  2 09:21:51.189: INFO: Pod "pod-subpath-test-dynamicpv-bnff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379933907s
Aug  2 09:21:53.380: INFO: Pod "pod-subpath-test-dynamicpv-bnff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570368517s
Aug  2 09:21:55.570: INFO: Pod "pod-subpath-test-dynamicpv-bnff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.76057082s
Aug  2 09:21:57.760: INFO: Pod "pod-subpath-test-dynamicpv-bnff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.950878909s
Aug  2 09:21:59.953: INFO: Pod "pod-subpath-test-dynamicpv-bnff": Phase="Pending", Reason="", readiness=false. Elapsed: 11.143477028s
Aug  2 09:22:02.143: INFO: Pod "pod-subpath-test-dynamicpv-bnff": Phase="Pending", Reason="", readiness=false. Elapsed: 13.333774389s
Aug  2 09:22:04.334: INFO: Pod "pod-subpath-test-dynamicpv-bnff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.524331508s
STEP: Saw pod success
Aug  2 09:22:04.334: INFO: Pod "pod-subpath-test-dynamicpv-bnff" satisfied condition "Succeeded or Failed"
Aug  2 09:22:04.524: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-bnff container test-container-volume-dynamicpv-bnff: <nil>
STEP: delete the pod
Aug  2 09:22:04.911: INFO: Waiting for pod pod-subpath-test-dynamicpv-bnff to disappear
Aug  2 09:22:05.100: INFO: Pod pod-subpath-test-dynamicpv-bnff no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-bnff
Aug  2 09:22:05.101: INFO: Deleting pod "pod-subpath-test-dynamicpv-bnff" in namespace "provisioning-8767"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:22.414: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 50 lines ...
• [SLOW TEST:8.968 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":9,"skipped":65,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:30.061: INFO: Only supported for providers [openstack] (not aws)
... skipping 50 lines ...
• [SLOW TEST:12.613 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":11,"skipped":45,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:32.524: INFO: Only supported for providers [gce gke] (not aws)
... skipping 181 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-68762179-d67e-493c-9766-498fc3fa1037
STEP: Creating a pod to test consume configMaps
Aug  2 09:22:31.404: INFO: Waiting up to 5m0s for pod "pod-configmaps-ff5a7baa-ec57-4626-8346-e68c7940320c" in namespace "configmap-9639" to be "Succeeded or Failed"
Aug  2 09:22:31.594: INFO: Pod "pod-configmaps-ff5a7baa-ec57-4626-8346-e68c7940320c": Phase="Pending", Reason="", readiness=false. Elapsed: 189.934031ms
Aug  2 09:22:33.785: INFO: Pod "pod-configmaps-ff5a7baa-ec57-4626-8346-e68c7940320c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.380606858s
STEP: Saw pod success
Aug  2 09:22:33.785: INFO: Pod "pod-configmaps-ff5a7baa-ec57-4626-8346-e68c7940320c" satisfied condition "Succeeded or Failed"
Aug  2 09:22:33.975: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod pod-configmaps-ff5a7baa-ec57-4626-8346-e68c7940320c container agnhost-container: <nil>
STEP: delete the pod
Aug  2 09:22:34.364: INFO: Waiting for pod pod-configmaps-ff5a7baa-ec57-4626-8346-e68c7940320c to disappear
Aug  2 09:22:34.554: INFO: Pod pod-configmaps-ff5a7baa-ec57-4626-8346-e68c7940320c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:22:34.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9639" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":67,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:34.964: INFO: Only supported for providers [vsphere] (not aws)
... skipping 158 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-16834e45-4810-4795-8005-d2b68960e89c
STEP: Creating a pod to test consume secrets
Aug  2 09:22:33.992: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a50592a8-c808-4172-a1f0-a8516ccf135e" in namespace "projected-4771" to be "Succeeded or Failed"
Aug  2 09:22:34.181: INFO: Pod "pod-projected-secrets-a50592a8-c808-4172-a1f0-a8516ccf135e": Phase="Pending", Reason="", readiness=false. Elapsed: 188.510062ms
Aug  2 09:22:36.370: INFO: Pod "pod-projected-secrets-a50592a8-c808-4172-a1f0-a8516ccf135e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.377828801s
STEP: Saw pod success
Aug  2 09:22:36.370: INFO: Pod "pod-projected-secrets-a50592a8-c808-4172-a1f0-a8516ccf135e" satisfied condition "Succeeded or Failed"
Aug  2 09:22:36.559: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-projected-secrets-a50592a8-c808-4172-a1f0-a8516ccf135e container projected-secret-volume-test: <nil>
STEP: delete the pod
Aug  2 09:22:36.949: INFO: Waiting for pod pod-projected-secrets-a50592a8-c808-4172-a1f0-a8516ccf135e to disappear
Aug  2 09:22:37.138: INFO: Pod pod-projected-secrets-a50592a8-c808-4172-a1f0-a8516ccf135e no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:22:37.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4771" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":76,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:39.463 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:920
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":8,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:37.743: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 11 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":11,"skipped":84,"failed":0}
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:22:36.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename multi-az
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
  Zone count is 1, only run for multi-zone clusters, skipping test

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:55
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":55,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:21:35.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:310
    should require VolumeAttach for drivers with attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:332
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":9,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:38.650: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 148 lines ...
• [SLOW TEST:20.702 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":15,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:40.968: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 124 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:666
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:681
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":4,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:7.261 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:45.788: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 48 lines ...
• [SLOW TEST:11.555 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":9,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:22:43.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-fd972fda-255f-4993-87e9-8b22f6786d82
STEP: Creating a pod to test consume configMaps
Aug  2 09:22:44.993: INFO: Waiting up to 5m0s for pod "pod-configmaps-c5389c23-9532-4e1f-a460-21072d8237a2" in namespace "configmap-4482" to be "Succeeded or Failed"
Aug  2 09:22:45.183: INFO: Pod "pod-configmaps-c5389c23-9532-4e1f-a460-21072d8237a2": Phase="Pending", Reason="", readiness=false. Elapsed: 190.479906ms
Aug  2 09:22:47.373: INFO: Pod "pod-configmaps-c5389c23-9532-4e1f-a460-21072d8237a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38039382s
Aug  2 09:22:49.563: INFO: Pod "pod-configmaps-c5389c23-9532-4e1f-a460-21072d8237a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.570049066s
STEP: Saw pod success
Aug  2 09:22:49.563: INFO: Pod "pod-configmaps-c5389c23-9532-4e1f-a460-21072d8237a2" satisfied condition "Succeeded or Failed"
Aug  2 09:22:49.752: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-configmaps-c5389c23-9532-4e1f-a460-21072d8237a2 container agnhost-container: <nil>
STEP: delete the pod
Aug  2 09:22:50.143: INFO: Waiting for pod pod-configmaps-c5389c23-9532-4e1f-a460-21072d8237a2 to disappear
Aug  2 09:22:50.332: INFO: Pod pod-configmaps-c5389c23-9532-4e1f-a460-21072d8237a2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:52.657: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 55 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":9,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:22:50.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:22:52.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6081" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":6,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:52.856: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 17 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:22:49.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:139
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:22:52.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5109" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":10,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:53.075: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 128 lines ...
• [SLOW TEST:9.073 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":13,"skipped":89,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 59 lines ...
Aug  2 09:21:29.933: INFO: PersistentVolumeClaim csi-hostpathnjxdw found but phase is Pending instead of Bound.
Aug  2 09:21:32.125: INFO: PersistentVolumeClaim csi-hostpathnjxdw found but phase is Pending instead of Bound.
Aug  2 09:21:34.317: INFO: PersistentVolumeClaim csi-hostpathnjxdw found but phase is Pending instead of Bound.
Aug  2 09:21:36.509: INFO: PersistentVolumeClaim csi-hostpathnjxdw found and phase=Bound (22.114953088s)
STEP: Creating pod pod-subpath-test-dynamicpv-4mz4
STEP: Creating a pod to test atomic-volume-subpath
Aug  2 09:21:37.088: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-4mz4" in namespace "provisioning-2246" to be "Succeeded or Failed"
Aug  2 09:21:37.281: INFO: Pod "pod-subpath-test-dynamicpv-4mz4": Phase="Pending", Reason="", readiness=false. Elapsed: 192.446781ms
Aug  2 09:21:39.473: INFO: Pod "pod-subpath-test-dynamicpv-4mz4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384481827s
Aug  2 09:21:41.665: INFO: Pod "pod-subpath-test-dynamicpv-4mz4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.576589317s
Aug  2 09:21:43.857: INFO: Pod "pod-subpath-test-dynamicpv-4mz4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.768983321s
Aug  2 09:21:46.051: INFO: Pod "pod-subpath-test-dynamicpv-4mz4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.963376091s
Aug  2 09:21:48.244: INFO: Pod "pod-subpath-test-dynamicpv-4mz4": Phase="Running", Reason="", readiness=true. Elapsed: 11.155618728s
... skipping 6 lines ...
Aug  2 09:22:03.589: INFO: Pod "pod-subpath-test-dynamicpv-4mz4": Phase="Running", Reason="", readiness=true. Elapsed: 26.501200099s
Aug  2 09:22:05.782: INFO: Pod "pod-subpath-test-dynamicpv-4mz4": Phase="Running", Reason="", readiness=true. Elapsed: 28.69415721s
Aug  2 09:22:07.975: INFO: Pod "pod-subpath-test-dynamicpv-4mz4": Phase="Running", Reason="", readiness=true. Elapsed: 30.886512118s
Aug  2 09:22:10.167: INFO: Pod "pod-subpath-test-dynamicpv-4mz4": Phase="Running", Reason="", readiness=true. Elapsed: 33.079251221s
Aug  2 09:22:12.360: INFO: Pod "pod-subpath-test-dynamicpv-4mz4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.272127692s
STEP: Saw pod success
Aug  2 09:22:12.360: INFO: Pod "pod-subpath-test-dynamicpv-4mz4" satisfied condition "Succeeded or Failed"
Aug  2 09:22:12.552: INFO: Trying to get logs from node ip-172-20-56-163.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-4mz4 container test-container-subpath-dynamicpv-4mz4: <nil>
STEP: delete the pod
Aug  2 09:22:13.023: INFO: Waiting for pod pod-subpath-test-dynamicpv-4mz4 to disappear
Aug  2 09:22:13.215: INFO: Pod pod-subpath-test-dynamicpv-4mz4 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-4mz4
Aug  2 09:22:13.215: INFO: Deleting pod "pod-subpath-test-dynamicpv-4mz4" in namespace "provisioning-2246"
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":31,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 29 lines ...
Aug  2 09:22:56.128: INFO: AfterEach: Cleaning up test resources.
Aug  2 09:22:56.128: INFO: Deleting PersistentVolumeClaim "pvc-4tm2n"
Aug  2 09:22:56.317: INFO: Deleting PersistentVolume "hostpath-g84pv"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":7,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:56.527: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 62 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":9,"skipped":94,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:22:58.436: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 178 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:01.230: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 166 lines ...
• [SLOW TEST:61.798 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1562
------------------------------
{"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":4,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
Aug  2 09:22:39.874: INFO: Unable to read jessie_udp@dns-test-service.dns-7425 from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:40.063: INFO: Unable to read jessie_tcp@dns-test-service.dns-7425 from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:40.253: INFO: Unable to read jessie_udp@dns-test-service.dns-7425.svc from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:40.443: INFO: Unable to read jessie_tcp@dns-test-service.dns-7425.svc from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:40.632: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7425.svc from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:40.822: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7425.svc from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:41.966: INFO: Lookups using dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7425 wheezy_tcp@dns-test-service.dns-7425 wheezy_udp@dns-test-service.dns-7425.svc wheezy_tcp@dns-test-service.dns-7425.svc wheezy_udp@_http._tcp.dns-test-service.dns-7425.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7425.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7425 jessie_tcp@dns-test-service.dns-7425 jessie_udp@dns-test-service.dns-7425.svc jessie_tcp@dns-test-service.dns-7425.svc jessie_udp@_http._tcp.dns-test-service.dns-7425.svc jessie_tcp@_http._tcp.dns-test-service.dns-7425.svc]

Aug  2 09:22:47.159: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:47.349: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:47.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-7425 from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:47.730: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7425 from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:47.920: INFO: Unable to read wheezy_udp@dns-test-service.dns-7425.svc from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
... skipping 5 lines ...
Aug  2 09:22:50.208: INFO: Unable to read jessie_udp@dns-test-service.dns-7425 from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:50.399: INFO: Unable to read jessie_tcp@dns-test-service.dns-7425 from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:50.588: INFO: Unable to read jessie_udp@dns-test-service.dns-7425.svc from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:50.778: INFO: Unable to read jessie_tcp@dns-test-service.dns-7425.svc from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:50.971: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7425.svc from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:51.164: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7425.svc from pod dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7: the server could not find the requested resource (get pods dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7)
Aug  2 09:22:52.427: INFO: Lookups using dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7425 wheezy_tcp@dns-test-service.dns-7425 wheezy_udp@dns-test-service.dns-7425.svc wheezy_tcp@dns-test-service.dns-7425.svc wheezy_udp@_http._tcp.dns-test-service.dns-7425.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7425.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7425 jessie_tcp@dns-test-service.dns-7425 jessie_udp@dns-test-service.dns-7425.svc jessie_tcp@dns-test-service.dns-7425.svc jessie_udp@_http._tcp.dns-test-service.dns-7425.svc jessie_tcp@_http._tcp.dns-test-service.dns-7425.svc]

Aug  2 09:23:02.349: INFO: DNS probes using dns-7425/dns-test-15ee77a0-56eb-49c4-9dd6-454af34ebec7 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:44.811 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:03.351: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
Aug  2 09:22:53.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
Aug  2 09:22:54.096: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug  2 09:22:54.486: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9637" in namespace "provisioning-9637" to be "Succeeded or Failed"
Aug  2 09:22:54.676: INFO: Pod "hostpath-symlink-prep-provisioning-9637": Phase="Pending", Reason="", readiness=false. Elapsed: 190.074428ms
Aug  2 09:22:56.867: INFO: Pod "hostpath-symlink-prep-provisioning-9637": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.380287655s
STEP: Saw pod success
Aug  2 09:22:56.867: INFO: Pod "hostpath-symlink-prep-provisioning-9637" satisfied condition "Succeeded or Failed"
Aug  2 09:22:56.867: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9637" in namespace "provisioning-9637"
Aug  2 09:22:57.060: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9637" to be fully deleted
Aug  2 09:22:57.250: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-7k9r
STEP: Creating a pod to test subpath
Aug  2 09:22:57.441: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-7k9r" in namespace "provisioning-9637" to be "Succeeded or Failed"
Aug  2 09:22:57.631: INFO: Pod "pod-subpath-test-inlinevolume-7k9r": Phase="Pending", Reason="", readiness=false. Elapsed: 189.790151ms
Aug  2 09:22:59.823: INFO: Pod "pod-subpath-test-inlinevolume-7k9r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.382383653s
STEP: Saw pod success
Aug  2 09:22:59.823: INFO: Pod "pod-subpath-test-inlinevolume-7k9r" satisfied condition "Succeeded or Failed"
Aug  2 09:23:00.014: INFO: Trying to get logs from node ip-172-20-47-13.ap-southeast-2.compute.internal pod pod-subpath-test-inlinevolume-7k9r container test-container-volume-inlinevolume-7k9r: <nil>
STEP: delete the pod
Aug  2 09:23:00.404: INFO: Waiting for pod pod-subpath-test-inlinevolume-7k9r to disappear
Aug  2 09:23:00.594: INFO: Pod pod-subpath-test-inlinevolume-7k9r no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-7k9r
Aug  2 09:23:00.594: INFO: Deleting pod "pod-subpath-test-inlinevolume-7k9r" in namespace "provisioning-9637"
STEP: Deleting pod
Aug  2 09:23:00.784: INFO: Deleting pod "pod-subpath-test-inlinevolume-7k9r" in namespace "provisioning-9637"
Aug  2 09:23:01.164: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9637" in namespace "provisioning-9637" to be "Succeeded or Failed"
Aug  2 09:23:01.355: INFO: Pod "hostpath-symlink-prep-provisioning-9637": Phase="Pending", Reason="", readiness=false. Elapsed: 190.039729ms
Aug  2 09:23:03.545: INFO: Pod "hostpath-symlink-prep-provisioning-9637": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.380494333s
STEP: Saw pod success
Aug  2 09:23:03.545: INFO: Pod "hostpath-symlink-prep-provisioning-9637" satisfied condition "Succeeded or Failed"
Aug  2 09:23:03.545: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9637" in namespace "provisioning-9637"
Aug  2 09:23:03.741: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9637" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:23:03.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9637" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":11,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:04.333: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 401 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":16,"skipped":70,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:06.495: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 127 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1410
    should modify fsGroup if fsGroupPolicy=default
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1434
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":6,"skipped":96,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:07.092: INFO: Only supported for providers [vsphere] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Aug  2 09:23:04.508: INFO: Waiting up to 5m0s for pod "metadata-volume-cdfd77b1-e297-4d7f-aa24-debe65d52e89" in namespace "projected-7403" to be "Succeeded or Failed"
Aug  2 09:23:04.699: INFO: Pod "metadata-volume-cdfd77b1-e297-4d7f-aa24-debe65d52e89": Phase="Pending", Reason="", readiness=false. Elapsed: 190.257177ms
Aug  2 09:23:06.888: INFO: Pod "metadata-volume-cdfd77b1-e297-4d7f-aa24-debe65d52e89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.379234582s
STEP: Saw pod success
Aug  2 09:23:06.888: INFO: Pod "metadata-volume-cdfd77b1-e297-4d7f-aa24-debe65d52e89" satisfied condition "Succeeded or Failed"
Aug  2 09:23:07.077: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod metadata-volume-cdfd77b1-e297-4d7f-aa24-debe65d52e89 container client-container: <nil>
STEP: delete the pod
Aug  2 09:23:07.466: INFO: Waiting for pod metadata-volume-cdfd77b1-e297-4d7f-aa24-debe65d52e89 to disappear
Aug  2 09:23:07.661: INFO: Pod metadata-volume-cdfd77b1-e297-4d7f-aa24-debe65d52e89 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:23:07.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7403" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:08.051: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 99 lines ...
• [SLOW TEST:30.131 seconds]
[sig-api-machinery] Servers with support for API chunking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":10,"skipped":70,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:08.904: INFO: Only supported for providers [gce gke] (not aws)
... skipping 105 lines ...
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug  2 09:23:06.037: INFO: start=2021-08-02 09:23:00.834642881 +0000 UTC m=+254.660141223, now=2021-08-02 09:23:06.03765269 +0000 UTC m=+259.863151112, kubelet pod: {"metadata":{"name":"pod-submit-remove-44ed1731-cbbe-47d4-85b3-bab8c9238dd7","namespace":"pods-2676","uid":"2aa9d2b3-fb4c-4154-925f-90959420d2d1","resourceVersion":"10324","creationTimestamp":"2021-08-02T09:22:57Z","deletionTimestamp":"2021-08-02T09:23:30Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"497009614"},"annotations":{"kubernetes.io/config.seen":"2021-08-02T09:22:57.796778102Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-08-02T09:22:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-r2fmf","secret":{"secretName":"default-token-r2fmf","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-r2fmf","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-56-163.ap-southeast-2.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-08-02T09:22:57Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-08-02T09:23:02Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-08-02T09:23:02Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-08-02T09:22:57Z"}],"hostIP":"172.20.56.163","podIP":"100.96.2.93","podIPs":[{"ip":"100.96.2.93"}],"startTime":"2021-08-02T09:22:57Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-08-02T09:22:58Z","finishedAt":"2021-08-02T09:23:01Z","containerID":"docker://8d0631bfe9a8bc05fb5faf3a1420d1cf46c7c035aee78bb90f60eb278aebed5d"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a","containerID":"docker://8d0631bfe9a8bc05fb5faf3a1420d1cf46c7c035aee78bb90f60eb278aebed5d","started":false}],"qosClass":"BestEffort"}}
Aug  2 09:23:11.029: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:23:11.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2676" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  [k8s.io] Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed","total":-1,"completed":8,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:11.613: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 61 lines ...
• [SLOW TEST:34.962 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:167
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service","total":-1,"completed":13,"skipped":79,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:12.551: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 119 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:23:13.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5943" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":9,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:14.135: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:23:15.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7937" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":14,"skipped":88,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":8,"skipped":41,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:23:11.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-map-ac82a225-f51a-4ec3-bbdf-afb37fc3b589
STEP: Creating a pod to test consume secrets
Aug  2 09:23:12.547: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1f0e2a58-26ca-4e6a-90a2-1da9c0f88b63" in namespace "projected-6046" to be "Succeeded or Failed"
Aug  2 09:23:12.739: INFO: Pod "pod-projected-secrets-1f0e2a58-26ca-4e6a-90a2-1da9c0f88b63": Phase="Pending", Reason="", readiness=false. Elapsed: 191.684538ms
Aug  2 09:23:14.932: INFO: Pod "pod-projected-secrets-1f0e2a58-26ca-4e6a-90a2-1da9c0f88b63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.384901104s
STEP: Saw pod success
Aug  2 09:23:14.932: INFO: Pod "pod-projected-secrets-1f0e2a58-26ca-4e6a-90a2-1da9c0f88b63" satisfied condition "Succeeded or Failed"
Aug  2 09:23:15.125: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod pod-projected-secrets-1f0e2a58-26ca-4e6a-90a2-1da9c0f88b63 container projected-secret-volume-test: <nil>
STEP: delete the pod
Aug  2 09:23:15.518: INFO: Waiting for pod pod-projected-secrets-1f0e2a58-26ca-4e6a-90a2-1da9c0f88b63 to disappear
Aug  2 09:23:15.719: INFO: Pod pod-projected-secrets-1f0e2a58-26ca-4e6a-90a2-1da9c0f88b63 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:23:15.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6046" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":41,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:16.165: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 99 lines ...
• [SLOW TEST:11.398 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":7,"skipped":100,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:18.520: INFO: Only supported for providers [gce gke] (not aws)
... skipping 115 lines ...
Aug  2 09:23:16.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Aug  2 09:23:17.343: INFO: Waiting up to 5m0s for pod "downward-api-2a3065f3-eac0-4227-9f9d-cd008dae22e2" in namespace "downward-api-6972" to be "Succeeded or Failed"
Aug  2 09:23:17.535: INFO: Pod "downward-api-2a3065f3-eac0-4227-9f9d-cd008dae22e2": Phase="Pending", Reason="", readiness=false. Elapsed: 192.335918ms
Aug  2 09:23:19.728: INFO: Pod "downward-api-2a3065f3-eac0-4227-9f9d-cd008dae22e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.38484177s
STEP: Saw pod success
Aug  2 09:23:19.728: INFO: Pod "downward-api-2a3065f3-eac0-4227-9f9d-cd008dae22e2" satisfied condition "Succeeded or Failed"
Aug  2 09:23:19.920: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod downward-api-2a3065f3-eac0-4227-9f9d-cd008dae22e2 container dapi-container: <nil>
STEP: delete the pod
Aug  2 09:23:20.313: INFO: Waiting for pod downward-api-2a3065f3-eac0-4227-9f9d-cd008dae22e2 to disappear
Aug  2 09:23:20.504: INFO: Pod downward-api-2a3065f3-eac0-4227-9f9d-cd008dae22e2 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 22 lines ...
• [SLOW TEST:5.051 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":15,"skipped":89,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:21.074: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:241

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":4,"skipped":50,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:22:14.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 39 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should implement legacy replacement when the update strategy is OnDelete
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:499
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":5,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:22.370: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 48 lines ...
• [SLOW TEST:21.125 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":17,"skipped":72,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:27.663: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 75 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Container restart
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130
    should verify that container can restart successfully after configmaps modified
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131
------------------------------
{"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":7,"skipped":36,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:28.241: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 118 lines ...
• [SLOW TEST:22.299 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":8,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:30.389: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 56 lines ...
Aug  2 09:23:19.828: INFO: PersistentVolumeClaim pvc-884q7 found but phase is Pending instead of Bound.
Aug  2 09:23:22.017: INFO: PersistentVolumeClaim pvc-884q7 found and phase=Bound (2.37908279s)
Aug  2 09:23:22.017: INFO: Waiting up to 3m0s for PersistentVolume local-j527j to have phase Bound
Aug  2 09:23:22.207: INFO: PersistentVolume local-j527j found and phase=Bound (189.66577ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8nsd
STEP: Creating a pod to test subpath
Aug  2 09:23:22.777: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8nsd" in namespace "provisioning-633" to be "Succeeded or Failed"
Aug  2 09:23:22.966: INFO: Pod "pod-subpath-test-preprovisionedpv-8nsd": Phase="Pending", Reason="", readiness=false. Elapsed: 189.469652ms
Aug  2 09:23:25.156: INFO: Pod "pod-subpath-test-preprovisionedpv-8nsd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379363351s
Aug  2 09:23:27.346: INFO: Pod "pod-subpath-test-preprovisionedpv-8nsd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.569424566s
STEP: Saw pod success
Aug  2 09:23:27.346: INFO: Pod "pod-subpath-test-preprovisionedpv-8nsd" satisfied condition "Succeeded or Failed"
Aug  2 09:23:27.536: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-8nsd container test-container-subpath-preprovisionedpv-8nsd: <nil>
STEP: delete the pod
Aug  2 09:23:27.925: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8nsd to disappear
Aug  2 09:23:28.114: INFO: Pod pod-subpath-test-preprovisionedpv-8nsd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8nsd
Aug  2 09:23:28.114: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8nsd" in namespace "provisioning-633"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:30.741: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 311 lines ...
• [SLOW TEST:5.816 seconds]
[sig-api-machinery] Discovery
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should validate PreferredVersion for each APIGroup [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":8,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:13.887 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:36.294: INFO: Only supported for providers [azure] (not aws)
... skipping 70 lines ...
Aug  2 09:23:03.120: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Aug  2 09:23:04.291: INFO: Successfully created a new PD: "aws://ap-southeast-2a/vol-034138911e2489d7f".
Aug  2 09:23:04.291: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-dnrh
STEP: Creating a pod to test exec-volume-test
Aug  2 09:23:04.482: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-dnrh" in namespace "volume-5256" to be "Succeeded or Failed"
Aug  2 09:23:04.671: INFO: Pod "exec-volume-test-inlinevolume-dnrh": Phase="Pending", Reason="", readiness=false. Elapsed: 189.224533ms
Aug  2 09:23:06.861: INFO: Pod "exec-volume-test-inlinevolume-dnrh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379130659s
Aug  2 09:23:09.050: INFO: Pod "exec-volume-test-inlinevolume-dnrh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.568359153s
Aug  2 09:23:11.241: INFO: Pod "exec-volume-test-inlinevolume-dnrh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.759004549s
Aug  2 09:23:13.431: INFO: Pod "exec-volume-test-inlinevolume-dnrh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.948971397s
Aug  2 09:23:15.621: INFO: Pod "exec-volume-test-inlinevolume-dnrh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.138873848s
STEP: Saw pod success
Aug  2 09:23:15.621: INFO: Pod "exec-volume-test-inlinevolume-dnrh" satisfied condition "Succeeded or Failed"
Aug  2 09:23:15.828: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod exec-volume-test-inlinevolume-dnrh container exec-container-inlinevolume-dnrh: <nil>
STEP: delete the pod
Aug  2 09:23:16.326: INFO: Waiting for pod exec-volume-test-inlinevolume-dnrh to disappear
Aug  2 09:23:16.514: INFO: Pod exec-volume-test-inlinevolume-dnrh no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-dnrh
Aug  2 09:23:16.515: INFO: Deleting pod "exec-volume-test-inlinevolume-dnrh" in namespace "volume-5256"
Aug  2 09:23:17.018: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-034138911e2489d7f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-034138911e2489d7f is currently attached to i-018f26259876b0424
	status code: 400, request id: e15923f1-ee9e-45e0-8f75-24cee3e6f294
Aug  2 09:23:22.920: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-034138911e2489d7f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-034138911e2489d7f is currently attached to i-018f26259876b0424
	status code: 400, request id: 34c72adc-8e22-414c-baca-70a9f7dac47b
Aug  2 09:23:28.805: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-034138911e2489d7f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-034138911e2489d7f is currently attached to i-018f26259876b0424
	status code: 400, request id: 4c4cd722-450c-4be3-be7b-9f7d35c2ebab
Aug  2 09:23:35.020: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-034138911e2489d7f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-034138911e2489d7f is currently attached to i-018f26259876b0424
	status code: 400, request id: ba6d5dd1-8f68-4304-8682-8773e0788db1
Aug  2 09:23:40.929: INFO: Successfully deleted PD "aws://ap-southeast-2a/vol-034138911e2489d7f".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:23:40.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5256" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:41.334: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 91 lines ...
• [SLOW TEST:35.655 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":11,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:44.580: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 398 lines ...
Aug  2 09:23:35.498: INFO: PersistentVolumeClaim pvc-ct8bl found but phase is Pending instead of Bound.
Aug  2 09:23:37.689: INFO: PersistentVolumeClaim pvc-ct8bl found and phase=Bound (11.138087999s)
Aug  2 09:23:37.689: INFO: Waiting up to 3m0s for PersistentVolume local-wjsg7 to have phase Bound
Aug  2 09:23:37.879: INFO: PersistentVolume local-wjsg7 found and phase=Bound (189.568336ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-n6nh
STEP: Creating a pod to test subpath
Aug  2 09:23:38.447: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-n6nh" in namespace "provisioning-5118" to be "Succeeded or Failed"
Aug  2 09:23:38.636: INFO: Pod "pod-subpath-test-preprovisionedpv-n6nh": Phase="Pending", Reason="", readiness=false. Elapsed: 188.961378ms
Aug  2 09:23:40.830: INFO: Pod "pod-subpath-test-preprovisionedpv-n6nh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383256865s
Aug  2 09:23:43.020: INFO: Pod "pod-subpath-test-preprovisionedpv-n6nh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.573518719s
STEP: Saw pod success
Aug  2 09:23:43.021: INFO: Pod "pod-subpath-test-preprovisionedpv-n6nh" satisfied condition "Succeeded or Failed"
Aug  2 09:23:43.210: INFO: Trying to get logs from node ip-172-20-48-162.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-n6nh container test-container-volume-preprovisionedpv-n6nh: <nil>
STEP: delete the pod
Aug  2 09:23:43.600: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-n6nh to disappear
Aug  2 09:23:43.789: INFO: Pod pod-subpath-test-preprovisionedpv-n6nh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-n6nh
Aug  2 09:23:43.789: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-n6nh" in namespace "provisioning-5118"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":16,"skipped":92,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:47.707: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 46 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Aug  2 09:23:46.819: INFO: Waiting up to 5m0s for pod "client-envvars-967ea237-0c33-4398-a800-0d65da19c8fb" in namespace "pods-9854" to be "Succeeded or Failed"
Aug  2 09:23:47.008: INFO: Pod "client-envvars-967ea237-0c33-4398-a800-0d65da19c8fb": Phase="Pending", Reason="", readiness=false. Elapsed: 188.893297ms
Aug  2 09:23:49.197: INFO: Pod "client-envvars-967ea237-0c33-4398-a800-0d65da19c8fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.378141446s
STEP: Saw pod success
Aug  2 09:23:49.198: INFO: Pod "client-envvars-967ea237-0c33-4398-a800-0d65da19c8fb" satisfied condition "Succeeded or Failed"
Aug  2 09:23:49.387: INFO: Trying to get logs from node ip-172-20-35-97.ap-southeast-2.compute.internal pod client-envvars-967ea237-0c33-4398-a800-0d65da19c8fb container env3cont: <nil>
STEP: delete the pod
Aug  2 09:23:49.779: INFO: Waiting for pod client-envvars-967ea237-0c33-4398-a800-0d65da19c8fb to disappear
Aug  2 09:23:49.968: INFO: Pod client-envvars-967ea237-0c33-4398-a800-0d65da19c8fb no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.634 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:50.360: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 40 lines ...
Aug  2 09:23:20.210: INFO: PersistentVolumeClaim pvc-dw6f4 found but phase is Pending instead of Bound.
Aug  2 09:23:22.400: INFO: PersistentVolumeClaim pvc-dw6f4 found and phase=Bound (11.148554151s)
Aug  2 09:23:22.400: INFO: Waiting up to 3m0s for PersistentVolume local-d4wnr to have phase Bound
Aug  2 09:23:22.590: INFO: PersistentVolume local-d4wnr found and phase=Bound (189.419472ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wdvh
STEP: Creating a pod to test atomic-volume-subpath
Aug  2 09:23:23.165: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wdvh" in namespace "provisioning-3787" to be "Succeeded or Failed"
Aug  2 09:23:23.355: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh": Phase="Pending", Reason="", readiness=false. Elapsed: 190.084838ms
Aug  2 09:23:25.545: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh": Phase="Running", Reason="", readiness=true. Elapsed: 2.380279798s
Aug  2 09:23:27.735: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh": Phase="Running", Reason="", readiness=true. Elapsed: 4.570437299s
Aug  2 09:23:29.926: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh": Phase="Running", Reason="", readiness=true. Elapsed: 6.761481382s
Aug  2 09:23:32.116: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh": Phase="Running", Reason="", readiness=true. Elapsed: 8.951424769s
Aug  2 09:23:34.307: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh": Phase="Running", Reason="", readiness=true. Elapsed: 11.142689558s
Aug  2 09:23:36.510: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh": Phase="Running", Reason="", readiness=true. Elapsed: 13.345513761s
Aug  2 09:23:38.700: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh": Phase="Running", Reason="", readiness=true. Elapsed: 15.53561354s
Aug  2 09:23:40.891: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh": Phase="Running", Reason="", readiness=true. Elapsed: 17.726041539s
Aug  2 09:23:43.084: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh": Phase="Running", Reason="", readiness=true. Elapsed: 19.918973669s
Aug  2 09:23:45.274: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh": Phase="Running", Reason="", readiness=true. Elapsed: 22.109694826s
Aug  2 09:23:47.465: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.300353963s
STEP: Saw pod success
Aug  2 09:23:47.465: INFO: Pod "pod-subpath-test-preprovisionedpv-wdvh" satisfied condition "Succeeded or Failed"
Aug  2 09:23:47.655: INFO: Trying to get logs from node ip-172-20-47-13.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-wdvh container test-container-subpath-preprovisionedpv-wdvh: <nil>
STEP: delete the pod
Aug  2 09:23:48.047: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wdvh to disappear
Aug  2 09:23:48.238: INFO: Pod pod-subpath-test-preprovisionedpv-wdvh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wdvh
Aug  2 09:23:48.238: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wdvh" in namespace "provisioning-3787"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":12,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Aug  2 09:23:52.134: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 21 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212
Aug  2 09:23:50.215: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-66161817-493a-492d-ad7f-fd850c54be32" in namespace "security-context-test-6665" to be "Succeeded or Failed"
Aug  2 09:23:50.404: INFO: Pod "busybox-readonly-true-66161817-493a-492d-ad7f-fd850c54be32": Phase="Pending", Reason="", readiness=false. Elapsed: 189.044036ms
Aug  2 09:23:52.595: INFO: Pod "busybox-readonly-true-66161817-493a-492d-ad7f-fd850c54be32": Phase="Failed", Reason="", readiness=false. Elapsed: 2.379884644s
Aug  2 09:23:52.595: INFO: Pod "busybox-readonly-true-66161817-493a-492d-ad7f-fd850c54be32" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:23:52.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6665" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":17,"skipped":99,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:23:50.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-map-86c193da-2485-4ce3-8993-a40e0129a8b9
STEP: Creating a pod to test consume secrets
Aug  2 09:23:51.745: INFO: Waiting up to 5m0s for pod "pod-secrets-19aa4023-abb9-427a-a778-36e8dafa694c" in namespace "secrets-6362" to be "Succeeded or Failed"
Aug  2 09:23:51.934: INFO: Pod "pod-secrets-19aa4023-abb9-427a-a778-36e8dafa694c": Phase="Pending", Reason="", readiness=false. Elapsed: 189.333919ms
Aug  2 09:23:54.124: INFO: Pod "pod-secrets-19aa4023-abb9-427a-a778-36e8dafa694c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.379633823s
STEP: Saw pod success
Aug  2 09:23:54.124: INFO: Pod "pod-secrets-19aa4023-abb9-427a-a778-36e8dafa694c" satisfied condition "Succeeded or Failed"
Aug  2 09:23:54.314: INFO: Trying to get logs from node ip-172-20-47-13.ap-southeast-2.compute.internal pod pod-secrets-19aa4023-abb9-427a-a778-36e8dafa694c container secret-volume-test: <nil>
STEP: delete the pod
Aug  2 09:23:54.707: INFO: Waiting for pod pod-secrets-19aa4023-abb9-427a-a778-36e8dafa694c to disappear
Aug  2 09:23:54.896: INFO: Pod pod-secrets-19aa4023-abb9-427a-a778-36e8dafa694c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug  2 09:23:54.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6362" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":47,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":48,"failed":0}
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug  2 09:23:20.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44635 lines ...
 correctly\nI0802 09:30:56.849567       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-55bcfa78-5fbc-4412-b966-8da7d023d61d\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7011^4\") on node \"ip-172-20-35-97.ap-southeast-2.compute.internal\" \nI0802 09:30:56.851710       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-55bcfa78-5fbc-4412-b966-8da7d023d61d\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7011^4\") on node \"ip-172-20-35-97.ap-southeast-2.compute.internal\" \nI0802 09:30:56.860518       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-55bcfa78-5fbc-4412-b966-8da7d023d61d\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7011^4\") on node \"ip-172-20-35-97.ap-southeast-2.compute.internal\" \nI0802 09:30:56.866685       1 pv_controller_base.go:504] deletion of claim \"csi-mock-volumes-7011/pvc-9kxqm\" was already processed\nI0802 09:30:56.960824       1 namespace_controller.go:185] Namespace has been deleted port-forwarding-1699\nI0802 09:30:57.307872       1 pvc_protection_controller.go:291] PVC volumemode-8893/pvc-wwrxs is unused\nI0802 09:30:57.314264       1 pv_controller.go:638] volume \"local-thv4j\" is released and reclaim policy \"Retain\" will be executed\nI0802 09:30:57.317611       1 pv_controller.go:864] volume \"local-thv4j\" entered phase \"Released\"\nI0802 09:30:57.401590       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-8965/pod-93fe166f-cce6-4c26-9988-daa80d2ad018 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-rbm9n pvc- persistent-local-volumes-test-8965  8711bc0e-7ad3-40d8-8333-1908a1146dd1 26246 0 2021-08-02 09:30:49 +0000 UTC 2021-08-02 09:30:57 +0000 UTC 0xc001c82f48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-08-02 09:30:49 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-08-02 09:30:49 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvqrfnd,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-8965,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0802 09:30:57.401671       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-8965/pvc-rbm9n because it is still being used\nI0802 09:30:57.502810       1 pv_controller_base.go:504] deletion of claim \"volumemode-8893/pvc-wwrxs\" was already processed\nE0802 09:30:58.055496       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-4391/default: secrets \"default-token-dzfdf\" is forbidden: unable to create new content in namespace nettest-4391 because it is being terminated\nI0802 09:30:58.173172       1 namespace_controller.go:185] Namespace has been deleted projected-7508\nE0802 09:30:58.202080       1 pv_controller.go:1437] error finding provisioning plugin for claim volume-3615/pvc-v99wq: storageclass.storage.k8s.io \"volume-3615\" not found\nI0802 09:30:58.202329       1 event.go:291] \"Event occurred\" object=\"volume-3615/pvc-v99wq\" kind=\"PersistentVolumeClaim\" 
apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-3615\\\" not found\"\nI0802 09:30:58.370391       1 event.go:291] \"Event occurred\" object=\"cronjob-6503/concurrent-1627896300\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Completed\" message=\"Job completed\"\nI0802 09:30:58.394262       1 pv_controller.go:864] volume \"local-ptnfw\" entered phase \"Available\"\nI0802 09:30:58.648376       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-8965/pod-93fe166f-cce6-4c26-9988-daa80d2ad018 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-rbm9n pvc- persistent-local-volumes-test-8965  8711bc0e-7ad3-40d8-8333-1908a1146dd1 26246 0 2021-08-02 09:30:49 +0000 UTC 2021-08-02 09:30:57 +0000 UTC 0xc001c82f48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-08-02 09:30:49 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-08-02 09:30:49 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvqrfnd,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-8965,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi 
BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0802 09:30:58.648446       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-8965/pvc-rbm9n because it is still being used\nE0802 09:30:58.825071       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-8312/default: secrets \"default-token-fkzpq\" is forbidden: unable to create new content in namespace kubectl-8312 because it is being terminated\nI0802 09:30:58.832933       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-8312/agnhost-primary-gbktd\" objectUID=94339fe9-d7d1-4371-ad86-35afed48a6f2 kind=\"EndpointSlice\" virtual=false\nI0802 09:30:58.842500       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-8312/agnhost-primary-gbktd\" objectUID=94339fe9-d7d1-4371-ad86-35afed48a6f2 kind=\"EndpointSlice\" propagationPolicy=Background\nI0802 09:30:58.927537       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"kubectl-8312/agnhost-primary\" need=1 creating=1\nI0802 09:30:58.945725       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-8312/agnhost-primary-gfgrf\" objectUID=fa631324-7505-424b-b639-bde4ab3afe75 kind=\"Pod\" virtual=false\nE0802 09:31:00.000460       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0802 09:31:00.198909       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0802 09:31:01.255588       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-1976/default: secrets \"default-token-66dp5\" is forbidden: unable to create new content in namespace kubectl-1976 because it is being terminated\nI0802 09:31:01.272714       1 
garbagecollector.go:471] "Processing object" object="kubectl-1976/agnhost-primary-88ddx" objectUID=6e4cad44-c639-4919-9f1a-4396a7aa816e kind="Pod" virtual=false
I0802 09:31:01.275728       1 garbagecollector.go:580] "Deleting object" object="kubectl-1976/agnhost-primary-88ddx" objectUID=6e4cad44-c639-4919-9f1a-4396a7aa816e kind="Pod" propagationPolicy=Background
I0802 09:31:01.416464       1 namespace_controller.go:185] Namespace has been deleted provisioning-552
I0802 09:31:02.920622       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9194/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0802 09:31:02.929584       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-9194/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
E0802 09:31:03.014141       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0802 09:31:03.151104       1 pv_controller.go:864] volume "local-pv8vmx8" entered phase "Available"
I0802 09:31:03.262085       1 route_controller.go:294] set node ip-172-20-35-97.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:03.262087       1 route_controller.go:294] set node ip-172-20-47-13.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:03.262098       1 route_controller.go:294] set node ip-172-20-43-68.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:03.262108       1 route_controller.go:294] set node ip-172-20-48-162.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:03.262118       1 route_controller.go:294] set node ip-172-20-56-163.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:03.337875       1 pv_controller.go:915] claim "persistent-local-volumes-test-2637/pvc-qdfkl" bound to volume "local-pv8vmx8"
I0802 09:31:03.345743       1 pv_controller.go:864] volume "local-pv8vmx8" entered phase "Bound"
I0802 09:31:03.345937       1 pv_controller.go:967] volume "local-pv8vmx8" bound to claim "persistent-local-volumes-test-2637/pvc-qdfkl"
I0802 09:31:03.351784       1 pv_controller.go:808] claim "persistent-local-volumes-test-2637/pvc-qdfkl" entered phase "Bound"
I0802 09:31:04.215502       1 event.go:291] "Event occurred" object="cronjob-6503/concurrent" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SawCompletedJob" message="Saw completed job: concurrent-1627896300, status: Complete"
I0802 09:31:04.236775       1 event.go:291] "Event occurred" object="cronjob-6503/concurrent" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job concurrent-1627896660"
I0802 09:31:04.247474       1 event.go:291] "Event occurred" object="cronjob-6503/concurrent-1627896660" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: concurrent-1627896660-l2zhx"
I0802 09:31:04.261263       1 cronjob_controller.go:188] Unable to update status for cronjob-6503/concurrent (rv = 26450): Operation cannot be fulfilled on cronjobs.batch "concurrent": the object has been modified; please apply your changes to the latest version and try again
I0802 09:31:04.356552       1 namespace_controller.go:185] Namespace has been deleted volumemode-685-3249
I0802 09:31:04.533213       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-8965/pod-93fe166f-cce6-4c26-9988-daa80d2ad018 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-rbm9n pvc- persistent-local-volumes-test-8965  8711bc0e-7ad3-40d8-8333-1908a1146dd1 26246 0 2021-08-02 09:30:49 +0000 UTC 2021-08-02 09:30:57 +0000 UTC 0xc001c82f48 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-08-02 09:30:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-08-02 09:30:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvqrfnd,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-8965,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0802 09:31:04.533329       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-8965/pvc-rbm9n because it is still being used
I0802 09:31:04.539196       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-8965/pvc-rbm9n is unused
I0802 09:31:04.548462       1 pv_controller.go:638] volume "local-pvqrfnd" is released and reclaim policy "Retain" will be executed
I0802 09:31:04.551766       1 pv_controller.go:864] volume "local-pvqrfnd" entered phase "Released"
I0802 09:31:04.557819       1 pv_controller_base.go:504] deletion of claim "persistent-local-volumes-test-8965/pvc-rbm9n" was already processed
E0802 09:31:04.578496       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-8965/default: secrets "default-token-f65m6" is forbidden: unable to create new content in namespace persistent-local-volumes-test-8965 because it is being terminated
I0802 09:31:04.624670       1 pvc_protection_controller.go:291] PVC fsgroupchangepolicy-7691/aws5nc47 is unused
I0802 09:31:04.630953       1 pv_controller.go:638] volume "pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56" is released and reclaim policy "Delete" will be executed
I0802 09:31:04.633836       1 pv_controller.go:864] volume "pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56" entered phase "Released"
I0802 09:31:04.635998       1 pv_controller.go:1326] isVolumeReleased[pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56]: volume is released
I0802 09:31:04.738423       1 pvc_protection_controller.go:291] PVC volume-1305/pvc-h2pwn is unused
I0802 09:31:04.743870       1 pv_controller.go:638] volume "local-xl5w5" is released and reclaim policy "Retain" will be executed
I0802 09:31:04.745857       1 pv_controller.go:864] volume "local-xl5w5" entered phase "Released"
I0802 09:31:04.766650       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-2a/vol-0e92a6089ff4f894f: error deleting EBS volume "vol-0e92a6089ff4f894f" since volume is currently attached to "i-0a44735e77bbb5a11"
E0802 09:31:04.766708       1 goroutinemap.go:150] Operation for "delete-pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56[4c8c136a-5a72-4d8e-bb9b-add6aa491e02]" failed. No retries permitted until 2021-08-02 09:31:05.266689581 +0000 UTC m=+347.211255510 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-0e92a6089ff4f894f\" since volume is currently attached to \"i-0a44735e77bbb5a11\""
I0802 09:31:04.766780       1 event.go:291] "Event occurred" object="pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0e92a6089ff4f894f\" since volume is currently attached to \"i-0a44735e77bbb5a11\""
E0802 09:31:04.789209       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-4154/pvc-ftx5v: storageclass.storage.k8s.io "provisioning-4154" not found
I0802 09:31:04.789426       1 event.go:291] "Event occurred" object="provisioning-4154/pvc-ftx5v" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-4154\" not found"
I0802 09:31:04.934463       1 pv_controller_base.go:504] deletion of claim "volume-1305/pvc-h2pwn" was already processed
I0802 09:31:04.982952       1 pv_controller.go:864] volume "local-4bhkz" entered phase "Available"
E0802 09:31:05.656512       1 tokens_controller.go:262] error synchronizing serviceaccount volumemode-8893/default: secrets "default-token-lrfkn" is forbidden: unable to create new content in namespace volumemode-8893 because it is being terminated
I0802 09:31:05.726465       1 stateful_set.go:419] StatefulSet has been deleted statefulset-9194/ss
I0802 09:31:05.726777       1 garbagecollector.go:471] "Processing object" object="statefulset-9194/ss-7656bfbbb7" objectUID=b82107b6-a1fd-4823-9995-5ce48bac4155 kind="ControllerRevision" virtual=false
I0802 09:31:05.729034       1 garbagecollector.go:580] "Deleting object" object="statefulset-9194/ss-7656bfbbb7" objectUID=b82107b6-a1fd-4823-9995-5ce48bac4155 kind="ControllerRevision" propagationPolicy=Background
E0802 09:31:06.033809       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-8594/default: secrets "default-token-ztt7x" is forbidden: unable to create new content in namespace configmap-8594 because it is being terminated
I0802 09:31:06.516523       1 event.go:291] "Event occurred" object="cronjob-6503/concurrent-1627896360" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0802 09:31:07.084367       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0e92a6089ff4f894f") on node "ip-172-20-47-13.ap-southeast-2.compute.internal" 
I0802 09:31:07.093906       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0e92a6089ff4f894f") on node "ip-172-20-47-13.ap-southeast-2.compute.internal" 
I0802 09:31:07.305321       1 namespace_controller.go:185] Namespace has been deleted volumemode-5208
I0802 09:31:07.333017       1 namespace_controller.go:185] Namespace has been deleted projected-5425
I0802 09:31:07.360855       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpath-attacher-sm7dd" objectUID=5b84dbfe-59ff-4c56-94b8-5c61c2b56912 kind="EndpointSlice" virtual=false
I0802 09:31:07.371050       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpath-attacher-sm7dd" objectUID=5b84dbfe-59ff-4c56-94b8-5c61c2b56912 kind="EndpointSlice" propagationPolicy=Background
I0802 09:31:07.561811       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpath-attacher-79c8cc4956" objectUID=80bdbaa5-d73a-4e82-96b3-4e97d2fb7b9d kind="ControllerRevision" virtual=false
I0802 09:31:07.562030       1 stateful_set.go:419] StatefulSet has been deleted volumemode-5208-5908/csi-hostpath-attacher
I0802 09:31:07.562062       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpath-attacher-0" objectUID=47dfc3e1-ebc0-4d8d-afea-c6738f949359 kind="Pod" virtual=false
I0802 09:31:07.564108       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpath-attacher-0" objectUID=47dfc3e1-ebc0-4d8d-afea-c6738f949359 kind="Pod" propagationPolicy=Background
I0802 09:31:07.564244       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpath-attacher-79c8cc4956" objectUID=80bdbaa5-d73a-4e82-96b3-4e97d2fb7b9d kind="ControllerRevision" propagationPolicy=Background
I0802 09:31:07.944097       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpathplugin-cxvhz" objectUID=321e216c-8792-42ae-9d14-3f1ec27ea461 kind="EndpointSlice" virtual=false
I0802 09:31:07.948375       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpathplugin-cxvhz" objectUID=321e216c-8792-42ae-9d14-3f1ec27ea461 kind="EndpointSlice" propagationPolicy=Background
I0802 09:31:08.141415       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpathplugin-5598d484fb" objectUID=42678f9d-f519-4016-9fbc-72b11e2d65c0 kind="ControllerRevision" virtual=false
I0802 09:31:08.141591       1 stateful_set.go:419] StatefulSet has been deleted volumemode-5208-5908/csi-hostpathplugin
I0802 09:31:08.141791       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpathplugin-0" objectUID=6639a657-c831-44b0-be29-0d5adea0cdb9 kind="Pod" virtual=false
I0802 09:31:08.143970       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpathplugin-0" objectUID=6639a657-c831-44b0-be29-0d5adea0cdb9 kind="Pod" propagationPolicy=Background
I0802 09:31:08.144115       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpathplugin-5598d484fb" objectUID=42678f9d-f519-4016-9fbc-72b11e2d65c0 kind="ControllerRevision" propagationPolicy=Background
I0802 09:31:08.332446       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpath-provisioner-7kh5h" objectUID=3783c181-e4bd-496a-92fb-66303d7e92b7 kind="EndpointSlice" virtual=false
I0802 09:31:08.334670       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpath-provisioner-7kh5h" objectUID=3783c181-e4bd-496a-92fb-66303d7e92b7 kind="EndpointSlice" propagationPolicy=Background
I0802 09:31:08.530401       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpath-provisioner-7dc5c7ffd" objectUID=7d3c10f1-bfbe-43b6-8dc6-862957627c58 kind="ControllerRevision" virtual=false
I0802 09:31:08.530647       1 stateful_set.go:419] StatefulSet has been deleted volumemode-5208-5908/csi-hostpath-provisioner
I0802 09:31:08.530683       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpath-provisioner-0" objectUID=78a9e0b0-cb43-441b-b7f7-089b4e983845 kind="Pod" virtual=false
I0802 09:31:08.532649       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpath-provisioner-7dc5c7ffd" objectUID=7d3c10f1-bfbe-43b6-8dc6-862957627c58 kind="ControllerRevision" propagationPolicy=Background
I0802 09:31:08.533503       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpath-provisioner-0" objectUID=78a9e0b0-cb43-441b-b7f7-089b4e983845 kind="Pod" propagationPolicy=Background
I0802 09:31:08.721376       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpath-resizer-p52gq" objectUID=00936c06-29a7-4a55-91a7-c90b4bfb99a0 kind="EndpointSlice" virtual=false
I0802 09:31:08.726442       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpath-resizer-p52gq" objectUID=00936c06-29a7-4a55-91a7-c90b4bfb99a0 kind="EndpointSlice" propagationPolicy=Background
I0802 09:31:08.739772       1 pv_controller.go:915] claim "volume-3615/pvc-v99wq" bound to volume "local-ptnfw"
I0802 09:31:08.743361       1 pv_controller.go:1326] isVolumeReleased[pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56]: volume is released
I0802 09:31:08.745152       1 pv_controller.go:1326] isVolumeReleased[pvc-ce4f8c4d-01b2-4cdd-9029-023453a90880]: volume is released
I0802 09:31:08.746929       1 pv_controller.go:864] volume "local-ptnfw" entered phase "Bound"
I0802 09:31:08.747061       1 pv_controller.go:967] volume "local-ptnfw" bound to claim "volume-3615/pvc-v99wq"
I0802 09:31:08.752429       1 pv_controller.go:808] claim "volume-3615/pvc-v99wq" entered phase "Bound"
I0802 09:31:08.752932       1 pv_controller.go:915] claim "provisioning-4154/pvc-ftx5v" bound to volume "local-4bhkz"
I0802 09:31:08.758829       1 pv_controller.go:864] volume "local-4bhkz" entered phase "Bound"
I0802 09:31:08.758955       1 pv_controller.go:967] volume "local-4bhkz" bound to claim "provisioning-4154/pvc-ftx5v"
I0802 09:31:08.763665       1 pv_controller.go:808] claim "provisioning-4154/pvc-ftx5v" entered phase "Bound"
I0802 09:31:08.907148       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-2a/vol-0e92a6089ff4f894f: error deleting EBS volume "vol-0e92a6089ff4f894f" since volume is currently attached to "i-0a44735e77bbb5a11"
E0802 09:31:08.907316       1 goroutinemap.go:150] Operation for "delete-pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56[4c8c136a-5a72-4d8e-bb9b-add6aa491e02]" failed. No retries permitted until 2021-08-02 09:31:09.90729577 +0000 UTC m=+351.851861696 (durationBeforeRetry 1s). Error: "error deleting EBS volume \"vol-0e92a6089ff4f894f\" since volume is currently attached to \"i-0a44735e77bbb5a11\""
I0802 09:31:08.907665       1 event.go:291] "Event occurred" object="pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0e92a6089ff4f894f\" since volume is currently attached to \"i-0a44735e77bbb5a11\""
I0802 09:31:08.921634       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpath-resizer-5dd74b54cc" objectUID=87fc7a0e-7085-410a-bbcb-d7116f04bc5e kind="ControllerRevision" virtual=false
I0802 09:31:08.921773       1 stateful_set.go:419] StatefulSet has been deleted volumemode-5208-5908/csi-hostpath-resizer
I0802 09:31:08.921814       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpath-resizer-0" objectUID=91ae1fb4-b477-449d-aa72-2b5ed1559e5a kind="Pod" virtual=false
I0802 09:31:08.923951       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpath-resizer-5dd74b54cc" objectUID=87fc7a0e-7085-410a-bbcb-d7116f04bc5e kind="ControllerRevision" propagationPolicy=Background
I0802 09:31:08.924307       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpath-resizer-0" objectUID=91ae1fb4-b477-449d-aa72-2b5ed1559e5a kind="Pod" propagationPolicy=Background
I0802 09:31:08.962120       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-southeast-2a/vol-0cac793ed8569f2bc
I0802 09:31:08.962146       1 pv_controller.go:1421] volume "pvc-ce4f8c4d-01b2-4cdd-9029-023453a90880" deleted
I0802 09:31:08.976340       1 pv_controller_base.go:504] deletion of claim "provisioning-9126/awsnxq77" was already processed
I0802 09:31:09.112370       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpath-snapshotter-8g2wr" objectUID=d99e8b45-bf51-4a6b-b25b-2fcc076e88bc kind="EndpointSlice" virtual=false
I0802 09:31:09.116163       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpath-snapshotter-8g2wr" objectUID=d99e8b45-bf51-4a6b-b25b-2fcc076e88bc kind="EndpointSlice" propagationPolicy=Background
I0802 09:31:09.311147       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpath-snapshotter-745c7785f7" objectUID=180bde48-5f1a-495c-beda-d920ac2e2af9 kind="ControllerRevision" virtual=false
I0802 09:31:09.311384       1 garbagecollector.go:471] "Processing object" object="volumemode-5208-5908/csi-hostpath-snapshotter-0" objectUID=a850b48b-cdaf-4d2c-8975-932b381f34dd kind="Pod" virtual=false
I0802 09:31:09.311306       1 stateful_set.go:419] StatefulSet has been deleted volumemode-5208-5908/csi-hostpath-snapshotter
I0802 09:31:09.313735       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpath-snapshotter-0" objectUID=a850b48b-cdaf-4d2c-8975-932b381f34dd kind="Pod" propagationPolicy=Background
I0802 09:31:09.313851       1 garbagecollector.go:580] "Deleting object" object="volumemode-5208-5908/csi-hostpath-snapshotter-745c7785f7" objectUID=180bde48-5f1a-495c-beda-d920ac2e2af9 kind="ControllerRevision" propagationPolicy=Background
I0802 09:31:09.473183       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7011
I0802 09:31:09.638787       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-8965
I0802 09:31:09.961277       1 namespace_controller.go:185] Namespace has been deleted downward-api-1292
I0802 09:31:10.756813       1 namespace_controller.go:185] Namespace has been deleted volumemode-8893
I0802 09:31:11.084812       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-7011-9023/csi-mockplugin
I0802 09:31:11.085012       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-7011-9023/csi-mockplugin-0" objectUID=55f65f93-285c-4729-a618-75635687abc4 kind="Pod" virtual=false
I0802 09:31:11.085389       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-7011-9023/csi-mockplugin-d7c48c8fd" objectUID=9adf4089-b398-4fd5-b493-39565d04aa46 kind="ControllerRevision" virtual=false
I0802 09:31:11.087643       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-7011-9023/csi-mockplugin-0" objectUID=55f65f93-285c-4729-a618-75635687abc4 kind="Pod" propagationPolicy=Background
I0802 09:31:11.087915       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-7011-9023/csi-mockplugin-d7c48c8fd" objectUID=9adf4089-b398-4fd5-b493-39565d04aa46 kind="ControllerRevision" propagationPolicy=Background
E0802 09:31:11.131768       1 namespace_controller.go:162] deletion of namespace configmap-8594 failed: unexpected items still remain in namespace: configmap-8594 for gvr: /v1, Resource=pods
E0802 09:31:11.211263       1 namespace_controller.go:162] deletion of namespace configmap-8594 failed: unexpected items still remain in namespace: configmap-8594 for gvr: /v1, Resource=pods
I0802 09:31:11.275112       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-7011-9023/csi-mockplugin-attacher-589d75fbf9" objectUID=298f236c-7a03-4590-b472-d22a97bb17c1 kind="ControllerRevision" virtual=false
I0802 09:31:11.275314       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-7011-9023/csi-mockplugin-attacher
I0802 09:31:11.275373       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-7011-9023/csi-mockplugin-attacher-0" objectUID=5a0e9e06-1d83-44a1-b246-fcbd137403c4 kind="Pod" virtual=false
I0802 09:31:11.277314       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-7011-9023/csi-mockplugin-attacher-0" objectUID=5a0e9e06-1d83-44a1-b246-fcbd137403c4 kind="Pod" propagationPolicy=Background
I0802 09:31:11.277508       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-7011-9023/csi-mockplugin-attacher-589d75fbf9" objectUID=298f236c-7a03-4590-b472-d22a97bb17c1 kind="ControllerRevision" propagationPolicy=Background
E0802 09:31:11.303506       1 namespace_controller.go:162] deletion of namespace configmap-8594 failed: unexpected items still remain in namespace: configmap-8594 for gvr: /v1, Resource=pods
E0802 09:31:11.415080       1 namespace_controller.go:162] deletion of namespace configmap-8594 failed: unexpected items still remain in namespace: configmap-8594 for gvr: /v1, Resource=pods
I0802 09:31:11.468499       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-7011-9023/csi-mockplugin-resizer-7957989566" objectUID=b67748ad-87cb-4c89-823c-2bc594b5b25a kind="ControllerRevision" virtual=false
I0802 09:31:11.468617       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-7011-9023/csi-mockplugin-resizer
I0802 09:31:11.468670       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-7011-9023/csi-mockplugin-resizer-0" objectUID=9108cf85-5258-45d3-9c3a-d46d9b8836bd kind="Pod" virtual=false
I0802 09:31:11.471174       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-7011-9023/csi-mockplugin-resizer-0" objectUID=9108cf85-5258-45d3-9c3a-d46d9b8836bd kind="Pod" propagationPolicy=Background
I0802 09:31:11.471934       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-7011-9023/csi-mockplugin-resizer-7957989566" objectUID=b67748ad-87cb-4c89-823c-2bc594b5b25a kind="ControllerRevision" propagationPolicy=Background
E0802 09:31:11.580226       1 namespace_controller.go:162] deletion of namespace configmap-8594 failed: unexpected items still remain in namespace: configmap-8594 for gvr: /v1, Resource=pods
E0802 09:31:11.752120       1 namespace_controller.go:162] deletion of namespace configmap-8594 failed: unexpected items still remain in namespace: configmap-8594 for gvr: /v1, Resource=pods
E0802 09:31:11.792953       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-695/pvc-5rp4r: storageclass.storage.k8s.io "provisioning-695" not found
I0802 09:31:11.793332       1 event.go:291] "Event occurred" object="provisioning-695/pvc-5rp4r" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-695\" not found"
I0802 09:31:11.991350       1 pv_controller.go:864] volume "local-khr6k" entered phase "Available"
E0802 09:31:11.999494       1 namespace_controller.go:162] deletion of namespace configmap-8594 failed: unexpected items still remain in namespace: configmap-8594 for gvr: /v1, Resource=pods
I0802 09:31:12.177911       1 namespace_controller.go:185] Namespace has been deleted secrets-4728
E0802 09:31:12.403052       1 namespace_controller.go:162] deletion of namespace configmap-8594 failed: unexpected items still remain in namespace: configmap-8594 for gvr: /v1, Resource=pods
I0802 09:31:12.481989       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume "pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0e92a6089ff4f894f") on node "ip-172-20-47-13.ap-southeast-2.compute.internal" 
E0802 09:31:13.145289       1 namespace_controller.go:162] deletion of namespace configmap-8594 failed: unexpected items still remain in namespace: configmap-8594 for gvr: /v1, Resource=pods
I0802 09:31:13.243068       1 route_controller.go:294] set node ip-172-20-35-97.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:13.243078       1 route_controller.go:294] set node ip-172-20-47-13.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:13.243095       1 route_controller.go:294] set node ip-172-20-48-162.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:13.243104       1 route_controller.go:294] set node ip-172-20-56-163.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:13.243115       1 route_controller.go:294] set node ip-172-20-43-68.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:13.287456       1 namespace_controller.go:185] Namespace has been deleted nettest-4391
I0802 09:31:13.833034       1 pvc_protection_controller.go:291] PVC volume-3615/pvc-v99wq is unused
I0802 09:31:13.838768       1 pv_controller.go:638] volume "local-ptnfw" is released and reclaim policy "Retain" will be executed
I0802 09:31:13.841401       1 pv_controller.go:864] volume "local-ptnfw" entered phase "Released"
I0802 09:31:14.024851       1 pv_controller_base.go:504] deletion of claim "volume-3615/pvc-v99wq" was already processed
I0802 09:31:14.265643       1 event.go:291] "Event occurred" object="cronjob-6503/concurrent" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SawCompletedJob" message="Saw completed job: concurrent-1627896360, status: Complete"
E0802 09:31:14.501843       1 namespace_controller.go:162] deletion of namespace configmap-8594 failed: unexpected items still remain in namespace: configmap-8594 for gvr: /v1, Resource=pods
I0802 09:31:16.570855       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-2637/pod-318cf6ac-a615-445a-b0df-9d37e541e60c uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-qdfkl pvc- persistent-local-volumes-test-2637  14ba68d4-27e3-4b21-8e7b-53776859cd74 26882 0 2021-08-02 09:31:03 +0000 UTC 2021-08-02 09:31:16 +0000 UTC 0xc0025a3fc8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-08-02 09:31:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-08-02 09:31:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv8vmx8,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-2637,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0802 09:31:16.570955       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-2637/pvc-qdfkl because it is still being used
I0802 09:31:16.693505       1 namespace_controller.go:185] Namespace has been deleted statefulset-9194
E0802 09:31:17.143862       1 namespace_controller.go:162] deletion of namespace configmap-8594 failed: unexpected items still remain in namespace: configmap-8594 for gvr: /v1, Resource=pods
I0802 09:31:17.616151       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-2637/pod-318cf6ac-a615-445a-b0df-9d37e541e60c uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-qdfkl pvc- persistent-local-volumes-test-2637  14ba68d4-27e3-4b21-8e7b-53776859cd74 26882 0 2021-08-02 09:31:03 +0000 UTC 2021-08-02 09:31:16 +0000 UTC 0xc0025a3fc8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-08-02 09:31:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-08-02 09:31:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv8vmx8,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-2637,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0802 09:31:17.616250       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-2637/pvc-qdfkl because it is still being used
I0802 09:31:17.618123       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-2637/pod-146c0b0c-03af-44ee-8b92-40cfea581ab9 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-qdfkl pvc- persistent-local-volumes-test-2637  14ba68d4-27e3-4b21-8e7b-53776859cd74 26882 0 2021-08-02 09:31:03 +0000 UTC 2021-08-02 09:31:16 +0000 UTC 0xc0025a3fc8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-08-02 09:31:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-08-02 09:31:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv8vmx8,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-2637,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0802 09:31:17.618175       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-2637/pvc-qdfkl because it is still being used
E0802 09:31:17.950504       1 tokens_controller.go:262] error synchronizing serviceaccount pod-network-test-3732/default: secrets "default-token-p5nkc" is forbidden: unable to create new content in namespace pod-network-test-3732 because it is being terminated
I0802 09:31:18.122864       1 namespace_controller.go:185] Namespace has been deleted projected-5098
I0802 09:31:18.239472       1 pvc_protection_controller.go:291] PVC volume-5107/pvc-6q9hd is unused
I0802 09:31:18.244857       1 pv_controller.go:638] volume "local-wljkd" is released and reclaim policy "Retain" will be executed
I0802 09:31:18.247363       1 pv_controller.go:864] volume "local-wljkd" entered phase "Released"
I0802 09:31:18.427436       1 pvc_protection_controller.go:291] PVC provisioning-4154/pvc-ftx5v is unused
I0802 09:31:18.434963       1 pv_controller.go:638] volume "local-4bhkz" is released and reclaim policy "Retain" will be executed
I0802 09:31:18.438668       1 pv_controller.go:864] volume "local-4bhkz" entered phase "Released"
I0802 09:31:18.438765       1 pv_controller_base.go:504] deletion of claim "volume-5107/pvc-6q9hd" was already processed
I0802 09:31:18.625410       1 pv_controller_base.go:504] deletion of claim "provisioning-4154/pvc-ftx5v" was already processed
I0802 09:31:19.019492       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-2637/pod-146c0b0c-03af-44ee-8b92-40cfea581ab9 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-qdfkl pvc- persistent-local-volumes-test-2637  14ba68d4-27e3-4b21-8e7b-53776859cd74 26882 0 2021-08-02 09:31:03 +0000 UTC 2021-08-02 09:31:16 +0000 UTC 0xc0025a3fc8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-08-02 09:31:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-08-02 09:31:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv8vmx8,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-2637,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0802 09:31:19.019565       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-2637/pvc-qdfkl because it is still being used
I0802 09:31:19.216415       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-2637/pod-146c0b0c-03af-44ee-8b92-40cfea581ab9 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-qdfkl pvc- persistent-local-volumes-test-2637  14ba68d4-27e3-4b21-8e7b-53776859cd74 26882 0 2021-08-02 09:31:03 +0000 UTC 2021-08-02 09:31:16 +0000 UTC 0xc0025a3fc8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-08-02 09:31:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-08-02 09:31:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi
BinarySI},},},VolumeName:local-pv8vmx8,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-2637,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0802 09:31:19.216667       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-2637/pvc-qdfkl because it is still being used\nI0802 09:31:19.222134       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-2637/pvc-qdfkl is unused\nI0802 09:31:19.228133       1 pv_controller.go:638] volume \"local-pv8vmx8\" is released and reclaim policy \"Retain\" will be executed\nI0802 09:31:19.231025       1 pv_controller.go:864] volume \"local-pv8vmx8\" entered phase \"Released\"\nI0802 09:31:19.234643       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-2637/pvc-qdfkl\" was already processed\nI0802 09:31:19.423454       1 aws.go:1819] Found instances in zones map[ap-southeast-2a:{}]\nI0802 09:31:19.750151       1 namespace_controller.go:185] Namespace has been deleted volume-1305\nI0802 09:31:20.061184       1 namespace_controller.go:185] Namespace has been deleted provisioning-9126\nI0802 09:31:20.138132       1 namespace_controller.go:185] Namespace has been deleted volumemode-5208-5908\nE0802 09:31:20.459571       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-1692/default: secrets \"default-token-jhpvj\" is forbidden: unable to create new content in namespace provisioning-1692 because it is being terminated\nE0802 09:31:20.544895       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2564/default: secrets \"default-token-xgmhr\" is forbidden: unable to create new content in namespace provisioning-2564 because it is being terminated\nE0802 09:31:21.487690       1 
tokens_controller.go:262] error synchronizing serviceaccount provisioning-539/default: secrets \"default-token-8sdpm\" is forbidden: unable to create new content in namespace provisioning-539 because it is being terminated\nI0802 09:31:22.030055       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-5419/test-cleanup-controller\" need=1 creating=1\nI0802 09:31:22.034683       1 event.go:291] \"Event occurred\" object=\"deployment-5419/test-cleanup-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-cleanup-controller-pgzbj\"\nE0802 09:31:22.291610       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-8941/pvc-n54mv: storageclass.storage.k8s.io \"provisioning-8941\" not found\nI0802 09:31:22.292008       1 event.go:291] \"Event occurred\" object=\"provisioning-8941/pvc-n54mv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-8941\\\" not found\"\nE0802 09:31:22.360735       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0802 09:31:22.495492       1 pv_controller.go:864] volume \"local-77h9t\" entered phase \"Available\"\nE0802 09:31:22.532352       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-805/default: secrets \"default-token-qqhwg\" is forbidden: unable to create new content in namespace downward-api-805 because it is being terminated\nE0802 09:31:22.646314       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0802 09:31:23.247591       1 route_controller.go:294] set node 
ip-172-20-43-68.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:23.247616       1 route_controller.go:294] set node ip-172-20-56-163.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:23.247625       1 route_controller.go:294] set node ip-172-20-35-97.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:23.247636       1 route_controller.go:294] set node ip-172-20-47-13.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:23.247603       1 route_controller.go:294] set node ip-172-20-48-162.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nE0802 09:31:23.611397       1 tokens_controller.go:262] error synchronizing serviceaccount metrics-grabber-4729/default: secrets \"default-token-dzxj2\" is forbidden: unable to create new content in namespace metrics-grabber-4729 because it is being terminated\nI0802 09:31:23.739798       1 pv_controller.go:915] claim \"provisioning-8941/pvc-n54mv\" bound to volume \"local-77h9t\"\nI0802 09:31:23.745434       1 pv_controller.go:1326] isVolumeReleased[pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56]: volume is released\nI0802 09:31:23.746117       1 pv_controller.go:864] volume \"local-77h9t\" entered phase \"Bound\"\nI0802 09:31:23.746245       1 pv_controller.go:967] volume \"local-77h9t\" bound to claim \"provisioning-8941/pvc-n54mv\"\nI0802 09:31:23.752658       1 pv_controller.go:808] claim \"provisioning-8941/pvc-n54mv\" entered phase \"Bound\"\nI0802 09:31:23.753110       1 pv_controller.go:915] claim \"provisioning-695/pvc-5rp4r\" bound to volume \"local-khr6k\"\nI0802 09:31:23.758222       1 pv_controller.go:864] volume \"local-khr6k\" entered phase \"Bound\"\nI0802 09:31:23.758252       1 pv_controller.go:967] 
volume \"local-khr6k\" bound to claim \"provisioning-695/pvc-5rp4r\"\nI0802 09:31:23.762892       1 pv_controller.go:808] claim \"provisioning-695/pvc-5rp4r\" entered phase \"Bound\"\nI0802 09:31:23.933794       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-southeast-2a/vol-0e92a6089ff4f894f\nI0802 09:31:23.933824       1 pv_controller.go:1421] volume \"pvc-d90f7694-562d-47f3-8152-a15d2dbd1b56\" deleted\nI0802 09:31:23.940948       1 pv_controller_base.go:504] deletion of claim \"fsgroupchangepolicy-7691/aws5nc47\" was already processed\nI0802 09:31:24.817178       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-southeast-2a/vol-0bc3f5c1db447bf80\nI0802 09:31:24.872047       1 pv_controller.go:1652] volume \"pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd\" provisioned for claim \"topology-5283/pvc-mj4xb\"\nI0802 09:31:24.872835       1 event.go:291] \"Event occurred\" object=\"topology-5283/pvc-mj4xb\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd using kubernetes.io/aws-ebs\"\nI0802 09:31:24.877910       1 pv_controller.go:864] volume \"pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd\" entered phase \"Bound\"\nI0802 09:31:24.878041       1 pv_controller.go:967] volume \"pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd\" bound to claim \"topology-5283/pvc-mj4xb\"\nI0802 09:31:24.892863       1 pv_controller.go:808] claim \"topology-5283/pvc-mj4xb\" entered phase \"Bound\"\nE0802 09:31:24.925686       1 tokens_controller.go:262] error synchronizing serviceaccount volume-3615/default: secrets \"default-token-w8ssx\" is forbidden: unable to create new content in namespace volume-3615 because it is being terminated\nI0802 09:31:25.130766       1 namespace_controller.go:185] Namespace has been deleted kubectl-8312\nE0802 09:31:25.221428       1 pv_controller.go:1437] error finding provisioning plugin for claim 
provisioning-11/pvc-5brt6: storageclass.storage.k8s.io \"provisioning-11\" not found\nI0802 09:31:25.221639       1 event.go:291] \"Event occurred\" object=\"provisioning-11/pvc-5brt6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-11\\\" not found\"\nI0802 09:31:25.414605       1 pv_controller.go:864] volume \"local-swtg4\" entered phase \"Available\"\nI0802 09:31:25.655234       1 namespace_controller.go:185] Namespace has been deleted provisioning-1692\nI0802 09:31:25.706807       1 namespace_controller.go:185] Namespace has been deleted provisioning-2564\nE0802 09:31:25.911727       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4154/default: secrets \"default-token-vz8gc\" is forbidden: unable to create new content in namespace provisioning-4154 because it is being terminated\nI0802 09:31:26.585694       1 namespace_controller.go:185] Namespace has been deleted provisioning-539\nI0802 09:31:26.626587       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0bc3f5c1db447bf80\") from node \"ip-172-20-35-97.ap-southeast-2.compute.internal\" \nI0802 09:31:26.677477       1 aws.go:2014] Assigned mount device cw -> volume vol-0bc3f5c1db447bf80\nI0802 09:31:26.955703       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7011-9023\nI0802 09:31:26.967780       1 namespace_controller.go:185] Namespace has been deleted volumemode-4832\nI0802 09:31:27.061447       1 aws.go:2427] AttachVolume volume=\"vol-0bc3f5c1db447bf80\" instance=\"i-002cc620c967da679\" request returned {\n  AttachTime: 2021-08-02 09:31:27.051 +0000 UTC,\n  Device: \"/dev/xvdcw\",\n  InstanceId: \"i-002cc620c967da679\",\n  State: \"attaching\",\n  VolumeId: \"vol-0bc3f5c1db447bf80\"\n}\nI0802 09:31:27.133840       1 
namespace_controller.go:185] Namespace has been deleted services-3093\nI0802 09:31:27.179120       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-5419/test-cleanup-deployment-685c4f8568\" need=1 creating=1\nI0802 09:31:27.179521       1 event.go:291] \"Event occurred\" object=\"deployment-5419/test-cleanup-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-cleanup-deployment-685c4f8568 to 1\"\nI0802 09:31:27.184201       1 event.go:291] \"Event occurred\" object=\"deployment-5419/test-cleanup-deployment-685c4f8568\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-cleanup-deployment-685c4f8568-sfvcw\"\nI0802 09:31:27.192163       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5419/test-cleanup-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-cleanup-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0802 09:31:27.280203       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-5488/pvc-h2zwn: storageclass.storage.k8s.io \"provisioning-5488\" not found\nI0802 09:31:27.281295       1 event.go:291] \"Event occurred\" object=\"provisioning-5488/pvc-h2zwn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-5488\\\" not found\"\nI0802 09:31:27.360916       1 namespace_controller.go:185] Namespace has been deleted configmap-8594\nI0802 09:31:27.472203       1 pv_controller.go:864] volume \"local-w5xbx\" entered phase \"Available\"\nI0802 09:31:27.598990       1 namespace_controller.go:185] Namespace has been deleted kubectl-1976\nI0802 09:31:27.629435       1 namespace_controller.go:185] Namespace has been deleted downward-api-805\nI0802 
09:31:27.856139       1 pv_controller.go:864] volume \"nfs-ss4t7\" entered phase \"Available\"\nE0802 09:31:27.979293       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0802 09:31:28.036518       1 namespace_controller.go:185] Namespace has been deleted disruption-900\nI0802 09:31:28.049654       1 pv_controller.go:915] claim \"pv-5237/pvc-89z4l\" bound to volume \"nfs-ss4t7\"\nI0802 09:31:28.071527       1 pv_controller.go:864] volume \"nfs-ss4t7\" entered phase \"Bound\"\nI0802 09:31:28.071643       1 pv_controller.go:967] volume \"nfs-ss4t7\" bound to claim \"pv-5237/pvc-89z4l\"\nI0802 09:31:28.083582       1 pv_controller.go:808] claim \"pv-5237/pvc-89z4l\" entered phase \"Bound\"\nI0802 09:31:28.680708       1 namespace_controller.go:185] Namespace has been deleted metrics-grabber-4729\nI0802 09:31:28.883134       1 event.go:291] \"Event occurred\" object=\"deployment-5419/test-cleanup-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-cleanup-controller to 0\"\nI0802 09:31:28.883494       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-5419/test-cleanup-controller\" need=0 deleting=1\nI0802 09:31:28.883530       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-5419/test-cleanup-controller\" relatedReplicaSets=[test-cleanup-controller test-cleanup-deployment-685c4f8568]\nI0802 09:31:28.883603       1 controller_utils.go:604] \"Deleting pod\" controller=\"test-cleanup-controller\" pod=\"deployment-5419/test-cleanup-controller-pgzbj\"\nI0802 09:31:28.894266       1 event.go:291] \"Event occurred\" object=\"deployment-5419/test-cleanup-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: 
test-cleanup-controller-pgzbj\"\nI0802 09:31:29.163065       1 aws.go:2037] Releasing in-process attachment entry: cw -> volume vol-0bc3f5c1db447bf80\nI0802 09:31:29.163120       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0bc3f5c1db447bf80\") from node \"ip-172-20-35-97.ap-southeast-2.compute.internal\" \nI0802 09:31:29.163295       1 event.go:291] \"Event occurred\" object=\"topology-5283/pod-f5bbbdd6-fc11-4f57-956a-0f108068eb31\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd\\\" \"\nI0802 09:31:29.361358       1 namespace_controller.go:185] Namespace has been deleted volume-6140\nI0802 09:31:29.456407       1 pvc_protection_controller.go:291] PVC provisioning-8941/pvc-n54mv is unused\nI0802 09:31:29.456768       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-2872, name: inline-volume-tester-j4q7c, uid: a30ca9cd-7078-442a-ae8a-bd3ac0091c15] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0802 09:31:29.456912       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872/inline-volume-tester-j4q7c\" objectUID=a30ca9cd-7078-442a-ae8a-bd3ac0091c15 kind=\"Pod\" virtual=false\nI0802 09:31:29.471521       1 pv_controller.go:638] volume \"local-77h9t\" is released and reclaim policy \"Retain\" will be executed\nI0802 09:31:29.473935       1 pv_controller.go:864] volume \"local-77h9t\" entered phase \"Released\"\nI0802 09:31:29.649102       1 pv_controller_base.go:504] deletion of claim \"provisioning-8941/pvc-n54mv\" was already processed\nI0802 09:31:29.992131       1 pv_controller.go:864] volume \"local-pvp8l7z\" entered phase \"Available\"\nI0802 09:31:30.006387       1 namespace_controller.go:185] Namespace has been deleted volume-3615\nI0802 
09:31:30.014169       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-2872, name: inline-volume-tester-j4q7c, uid: a30ca9cd-7078-442a-ae8a-bd3ac0091c15]\nI0802 09:31:30.177247       1 pv_controller.go:915] claim \"persistent-local-volumes-test-2788/pvc-d8927\" bound to volume \"local-pvp8l7z\"\nI0802 09:31:30.191487       1 pv_controller.go:864] volume \"local-pvp8l7z\" entered phase \"Bound\"\nI0802 09:31:30.191519       1 pv_controller.go:967] volume \"local-pvp8l7z\" bound to claim \"persistent-local-volumes-test-2788/pvc-d8927\"\nI0802 09:31:30.212484       1 pv_controller.go:808] claim \"persistent-local-volumes-test-2788/pvc-d8927\" entered phase \"Bound\"\nI0802 09:31:31.028848       1 namespace_controller.go:185] Namespace has been deleted provisioning-4154\nI0802 09:31:31.127377       1 namespace_controller.go:185] Namespace has been deleted emptydir-5956\nE0802 09:31:31.278787       1 tokens_controller.go:262] error synchronizing serviceaccount fsgroupchangepolicy-7691/default: secrets \"default-token-lh4qx\" is forbidden: unable to create new content in namespace fsgroupchangepolicy-7691 because it is being terminated\nE0802 09:31:31.380618       1 pv_controller.go:1437] error finding provisioning plugin for claim volumemode-2636/pvc-l54r9: storageclass.storage.k8s.io \"volumemode-2636\" not found\nI0802 09:31:31.380879       1 event.go:291] \"Event occurred\" object=\"volumemode-2636/pvc-l54r9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-2636\\\" not found\"\nI0802 09:31:31.397659       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [crd-publish-openapi-test-foo.example.com/v1, Resource=e2e-test-crd-publish-openapi-773-crds crd-publish-openapi-test-waldo.example.com/v1beta1, Resource=e2e-test-crd-publish-openapi-411-crds 
resourcequota.example.com/v1, Resource=e2e-test-resourcequota-6631-crds], removed: []\nI0802 09:31:31.397792       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-773-crds.crd-publish-openapi-test-foo.example.com\nI0802 09:31:31.397857       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-resourcequota-6631-crds.resourcequota.example.com\nI0802 09:31:31.397887       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-411-crds.crd-publish-openapi-test-waldo.example.com\nI0802 09:31:31.397956       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0802 09:31:31.398121       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (15h5m20.991945973s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0802 09:31:31.398392       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (15h5m20.991945973s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0802 09:31:31.398637       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (15h5m20.991945973s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nE0802 09:31:31.475618       1 tokens_controller.go:262] error synchronizing serviceaccount svcaccounts-1580/default: secrets \"default-token-5zgg7\" is forbidden: unable to create new content in namespace svcaccounts-1580 because it is being terminated\nI0802 09:31:31.498847       1 shared_informer.go:247] Caches are synced for resource quota \nI0802 09:31:31.498866       1 resource_quota_controller.go:454] synced quota controller\nI0802 09:31:31.519334       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-2637\nI0802 09:31:31.568180       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2168-9849/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0802 09:31:31.583038       1 pv_controller.go:864] volume \"local-hc7gp\" entered phase \"Available\"\nI0802 09:31:31.812328       1 pv_controller.go:864] volume \"local-pvxnqm7\" entered phase \"Available\"\nI0802 09:31:31.953971       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2168-9849/csi-mockplugin-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\"\nI0802 09:31:31.999933       1 pv_controller.go:915] claim \"persistent-local-volumes-test-154/pvc-7d55s\" bound to volume \"local-pvxnqm7\"\nI0802 09:31:32.005758       1 pv_controller.go:864] volume \"local-pvxnqm7\" entered phase \"Bound\"\nI0802 09:31:32.005948       1 pv_controller.go:967] volume \"local-pvxnqm7\" bound to claim \"persistent-local-volumes-test-154/pvc-7d55s\"\nI0802 09:31:32.013795       1 pv_controller.go:808] claim \"persistent-local-volumes-test-154/pvc-7d55s\" entered phase \"Bound\"\nI0802 09:31:32.015103       1 pvc_protection_controller.go:291] PVC pv-5237/pvc-89z4l is unused\nI0802 09:31:32.019886       1 pv_controller.go:638] volume \"nfs-ss4t7\" is released and reclaim policy \"Retain\" will be executed\nI0802 09:31:32.022403       1 pv_controller.go:864] volume \"nfs-ss4t7\" entered phase \"Released\"\nI0802 09:31:32.039032       1 pvc_protection_controller.go:291] PVC provisioning-695/pvc-5rp4r is unused\nI0802 09:31:32.045222       1 pv_controller.go:638] volume \"local-khr6k\" is released and reclaim policy \"Retain\" will be executed\nI0802 09:31:32.047386       1 pv_controller.go:864] volume \"local-khr6k\" entered phase \"Released\"\nI0802 09:31:32.233911       1 pv_controller_base.go:504] deletion of claim \"provisioning-695/pvc-5rp4r\" was already processed\nI0802 09:31:32.782417       1 
pv_controller_base.go:504] deletion of claim \"pv-5237/pvc-89z4l\" was already processed\nI0802 09:31:32.885631       1 event.go:291] \"Event occurred\" object=\"volume-6111/aws97kgv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0802 09:31:32.952682       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-154/pvc-7d55s is unused\nI0802 09:31:32.957503       1 pv_controller.go:638] volume \"local-pvxnqm7\" is released and reclaim policy \"Retain\" will be executed\nI0802 09:31:32.960056       1 pv_controller.go:864] volume \"local-pvxnqm7\" entered phase \"Released\"\nI0802 09:31:33.145341       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-154/pvc-7d55s\" was already processed\nI0802 09:31:33.240602       1 route_controller.go:294] set node ip-172-20-35-97.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:33.240610       1 route_controller.go:294] set node ip-172-20-47-13.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:33.240623       1 route_controller.go:294] set node ip-172-20-43-68.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:33.240634       1 route_controller.go:294] set node ip-172-20-48-162.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:33.240646       1 route_controller.go:294] set node ip-172-20-56-163.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:33.667224       1 namespace_controller.go:185] Namespace has been deleted volume-5107\nI0802 09:31:34.290051       1 resource_quota_controller.go:307] Resource quota has been deleted 
resourcequota-468/quota-for-e2e-test-resourcequota-6631-crds\nI0802 09:31:35.342091       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd-publish-openapi-test-foo.example.com/v1, Resource=e2e-test-crd-publish-openapi-773-crds crd-publish-openapi-test-waldo.example.com/v1beta1, Resource=e2e-test-crd-publish-openapi-411-crds resourcequota.example.com/v1, Resource=e2e-test-resourcequota-6631-crds], removed: []\nI0802 09:31:35.943831       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0802 09:31:35.944020       1 shared_informer.go:247] Caches are synced for garbage collector \nI0802 09:31:35.944033       1 garbagecollector.go:254] synced garbage collector\nI0802 09:31:36.012971       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-5419/test-cleanup-deployment-685c4f8568\" need=1 creating=1\nI0802 09:31:36.041242       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-5419/test-cleanup-deployment-685c4f8568\" objectUID=7c62fc5b-7a1d-4cc2-8d10-f99b89100b9a kind=\"ReplicaSet\" virtual=false\nI0802 09:31:36.043664       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5419/test-cleanup-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-cleanup-deployment\\\": StorageError: invalid object, Code: 4, Key: /registry/deployments/deployment-5419/test-cleanup-deployment, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 684c98cb-1bb7-4698-9646-17dcbece5031, UID in object meta: \"\nI0802 09:31:36.043841       1 deployment_controller.go:581] Deployment deployment-5419/test-cleanup-deployment has been deleted\nI0802 09:31:36.047823       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-5419/test-cleanup-deployment-685c4f8568\" objectUID=7c62fc5b-7a1d-4cc2-8d10-f99b89100b9a kind=\"ReplicaSet\" propagationPolicy=Background\nI0802 
09:31:36.053850       1 deployment_controller.go:581] Deployment deployment-5419/test-cleanup-deployment has been deleted
I0802 09:31:36.130630       1 pvc_protection_controller.go:291] PVC volume-5525/awssdqb4 is unused
I0802 09:31:36.136771       1 pv_controller.go:638] volume "pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa" is released and reclaim policy "Delete" will be executed
I0802 09:31:36.139682       1 pv_controller.go:864] volume "pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa" entered phase "Released"
I0802 09:31:36.141374       1 pv_controller.go:1326] isVolumeReleased[pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa]: volume is released
I0802 09:31:36.308178       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-2a/vol-03c43059308180a28: error deleting EBS volume "vol-03c43059308180a28" since volume is currently attached to "i-0a44735e77bbb5a11"
E0802 09:31:36.308355       1 goroutinemap.go:150] Operation for "delete-pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa[b68f4c5f-e77c-4cff-8a5d-9155ce59a6ba]" failed. No retries permitted until 2021-08-02 09:31:36.80833612 +0000 UTC m=+378.752902050 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-03c43059308180a28\" since volume is currently attached to \"i-0a44735e77bbb5a11\""
I0802 09:31:36.308448       1 event.go:291] "Event occurred" object="pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-03c43059308180a28\" since volume is currently attached to \"i-0a44735e77bbb5a11\""
I0802 09:31:36.336212       1 namespace_controller.go:185] Namespace has been deleted fsgroupchangepolicy-7691
I0802 09:31:36.522013       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-1580
E0802 09:31:36.592955       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-8941/default: secrets "default-token-xxsl4" is forbidden: unable to create new content in namespace provisioning-8941 because it is being terminated
I0802 09:31:36.788505       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-2788/pod-0a563f57-3a19-475e-b10d-d5a5d6bbf372 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-d8927 pvc- persistent-local-volumes-test-2788  cf06ca51-78e4-462e-aac1-91dd22dc8e41 27633 0 2021-08-02 09:31:30 +0000 UTC 2021-08-02 09:31:36 +0000 UTC 0xc000c9b478 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-08-02 09:31:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-08-02 09:31:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvp8l7z,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-2788,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0802 09:31:36.788989       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-2788/pvc-d8927 because it is still being used
E0802 09:31:36.863534       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-1035/pvc-rjwxj: storageclass.storage.k8s.io "provisioning-1035" not found
I0802 09:31:36.863806       1 event.go:291] "Event occurred" object="provisioning-1035/pvc-rjwxj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-1035\" not found"
I0802 09:31:37.058266       1 pv_controller.go:864] volume "local-gk4xg" entered phase "Available"
I0802 09:31:37.272264       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-03c43059308180a28") on node "ip-172-20-47-13.ap-southeast-2.compute.internal" 
I0802 09:31:37.275091       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-03c43059308180a28") on node "ip-172-20-47-13.ap-southeast-2.compute.internal" 
I0802 09:31:38.287614       1 event.go:291] "Event occurred" object="csi-mock-volumes-2168/pvc-5wtzr" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-2168\" or manually created by system administrator"
I0802 09:31:38.329299       1 pv_controller.go:864] volume "pvc-4bad0faf-5edc-49e4-97dd-0f773a63f216" entered phase "Bound"
I0802 09:31:38.329439       1 pv_controller.go:967] volume "pvc-4bad0faf-5edc-49e4-97dd-0f773a63f216" bound to claim "csi-mock-volumes-2168/pvc-5wtzr"
I0802 09:31:38.341818       1 pv_controller.go:808] claim "csi-mock-volumes-2168/pvc-5wtzr" entered phase "Bound"
I0802 09:31:38.545019       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-2788/pod-0a563f57-3a19-475e-b10d-d5a5d6bbf372 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-d8927 pvc- persistent-local-volumes-test-2788  cf06ca51-78e4-462e-aac1-91dd22dc8e41 27633 0 2021-08-02 09:31:30 +0000 UTC 2021-08-02 09:31:36 +0000 UTC 0xc000c9b478 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-08-02 09:31:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-08-02 09:31:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvp8l7z,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-2788,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0802 09:31:38.545090       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-2788/pvc-d8927 because it is still being used
I0802 09:31:38.616732       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-southeast-2a/vol-0c9068da7fc32fe35
I0802 09:31:38.668491       1 pv_controller.go:1652] volume "pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546" provisioned for claim "volume-6111/aws97kgv"
I0802 09:31:38.668849       1 event.go:291] "Event occurred" object="volume-6111/aws97kgv" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546 using kubernetes.io/aws-ebs"
I0802 09:31:38.672498       1 pv_controller.go:864] volume "pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546" entered phase "Bound"
I0802 09:31:38.672522       1 pv_controller.go:967] volume "pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546" bound to claim "volume-6111/aws97kgv"
I0802 09:31:38.677718       1 pv_controller.go:808] claim "volume-6111/aws97kgv" entered phase "Bound"
I0802 09:31:38.711169       1 namespace_controller.go:185] Namespace has been deleted pod-network-test-3732
I0802 09:31:38.740348       1 pv_controller.go:915] claim "provisioning-5488/pvc-h2zwn" bound to volume "local-w5xbx"
I0802 09:31:38.743642       1 pv_controller.go:1326] isVolumeReleased[pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa]: volume is released
I0802 09:31:38.749778       1 pv_controller.go:864] volume "local-w5xbx" entered phase "Bound"
I0802 09:31:38.750256       1 pv_controller.go:967] volume "local-w5xbx" bound to claim "provisioning-5488/pvc-h2zwn"
I0802 09:31:38.755520       1 pv_controller.go:808] claim "provisioning-5488/pvc-h2zwn" entered phase "Bound"
I0802 09:31:38.755654       1 pv_controller.go:915] claim "provisioning-1035/pvc-rjwxj" bound to volume "local-gk4xg"
I0802 09:31:38.760990       1 pv_controller.go:864] volume "local-gk4xg" entered phase "Bound"
I0802 09:31:38.761093       1 pv_controller.go:967] volume "local-gk4xg" bound to claim "provisioning-1035/pvc-rjwxj"
I0802 09:31:38.765990       1 pv_controller.go:808] claim "provisioning-1035/pvc-rjwxj" entered phase "Bound"
I0802 09:31:38.766445       1 pv_controller.go:915] claim "provisioning-11/pvc-5brt6" bound to volume "local-swtg4"
I0802 09:31:38.770744       1 pv_controller.go:864] volume "local-swtg4" entered phase "Bound"
I0802 09:31:38.770923       1 pv_controller.go:967] volume "local-swtg4" bound to claim "provisioning-11/pvc-5brt6"
I0802 09:31:38.776195       1 pv_controller.go:808] claim "provisioning-11/pvc-5brt6" entered phase "Bound"
I0802 09:31:38.776648       1 pv_controller.go:915] claim "volumemode-2636/pvc-l54r9" bound to volume "local-hc7gp"
I0802 09:31:38.781950       1 pv_controller.go:864] volume "local-hc7gp" entered phase "Bound"
I0802 09:31:38.781972       1 pv_controller.go:967] volume "local-hc7gp" bound to claim "volumemode-2636/pvc-l54r9"
I0802 09:31:38.787220       1 pv_controller.go:808] claim "volumemode-2636/pvc-l54r9" entered phase "Bound"
I0802 09:31:38.901084       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-2a/vol-03c43059308180a28: error deleting EBS volume "vol-03c43059308180a28" since volume is currently attached to "i-0a44735e77bbb5a11"
E0802 09:31:38.901174       1 goroutinemap.go:150] Operation for "delete-pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa[b68f4c5f-e77c-4cff-8a5d-9155ce59a6ba]" failed. No retries permitted until 2021-08-02 09:31:39.901144034 +0000 UTC m=+381.845709967 (durationBeforeRetry 1s). Error: "error deleting EBS volume \"vol-03c43059308180a28\" since volume is currently attached to \"i-0a44735e77bbb5a11\""
I0802 09:31:38.901290       1 event.go:291] "Event occurred" object="pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-03c43059308180a28\" since volume is currently attached to \"i-0a44735e77bbb5a11\""
I0802 09:31:39.275194       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0c9068da7fc32fe35") from node "ip-172-20-35-97.ap-southeast-2.compute.internal" 
I0802 09:31:39.311342       1 aws.go:2014] Assigned mount device bj -> volume vol-0c9068da7fc32fe35
I0802 09:31:39.572470       1 utils.go:413] couldn't find ipfamilies for headless service: services-2137/endpoint-test2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.50.254).
I0802 09:31:39.679376       1 replica_set.go:559] "Too few replicas" replicaSet="services-295/slow-terminating-unready-pod" need=1 creating=1
I0802 09:31:39.683572       1 event.go:291] "Event occurred" object="services-295/slow-terminating-unready-pod" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: slow-terminating-unready-pod-kmktz"
I0802 09:31:39.704890       1 aws.go:2427] AttachVolume volume="vol-0c9068da7fc32fe35" instance="i-002cc620c967da679" request returned {
  AttachTime: 2021-08-02 09:31:39.704 +0000 UTC,
  Device: "/dev/xvdbj",
  InstanceId: "i-002cc620c967da679",
  State: "attaching",
  VolumeId: "vol-0c9068da7fc32fe35"
}
I0802 09:31:39.873720       1 utils.go:413] couldn't find ipfamilies for headless service: services-295/tolerate-unready. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.114.99).
I0802 09:31:40.344901       1 utils.go:413] couldn't find ipfamilies for headless service: services-2137/endpoint-test2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.50.254).
E0802 09:31:40.421383       1 tokens_controller.go:262] error synchronizing serviceaccount clientset-4893/default: secrets "default-token-z729g" is forbidden: unable to create new content in namespace clientset-4893 because it is being terminated
I0802 09:31:40.579295       1 utils.go:413] couldn't find ipfamilies for headless service: services-295/tolerate-unready. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.114.99).
I0802 09:31:40.582599       1 utils.go:413] couldn't find ipfamilies for headless service: services-2137/endpoint-test2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.50.254).
I0802 09:31:40.595472       1 replica_set.go:559] "Too few replicas" replicaSet="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" need=40 creating=40
I0802 09:31:40.612353       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-djpzw"
I0802 09:31:40.621387       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-gpc8p"
I0802 09:31:40.621764       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-g885n"
I0802 09:31:40.637130       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-jczg2"
I0802 09:31:40.638370       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-mf5nj"
I0802 09:31:40.642677       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-p8bxz"
I0802 09:31:40.642856       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-2hc8n"
I0802 09:31:40.657817       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-nplcd"
I0802 09:31:40.660006       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-rb289"
I0802 09:31:40.660845       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-brks5"
I0802 09:31:40.661009       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-9f7mf"
I0802 09:31:40.664225       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-v2w2c"
I0802 09:31:40.672724       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-qsxdw"
I0802 09:31:40.672971       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-8f9n8"
I0802 09:31:40.673331       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-t6nf7"
I0802 09:31:40.696438       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-jzfct"
I0802 09:31:40.696809       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-vmkjf"
I0802 09:31:40.713275       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-x4d45"
I0802 09:31:40.713624       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-pxrwj"
I0802 09:31:40.714616       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-7dw5d"
I0802 09:31:40.714820       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-2hrw7"
I0802 09:31:40.714919       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-2496m"
I0802 09:31:40.715023       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-72wnq"
I0802 09:31:40.715189       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-2qh2q"
I0802 09:31:40.715530       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-d28nn"
I0802 09:31:40.715642       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-dnj29"
I0802 09:31:40.751718       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-kzs6v"
I0802 09:31:40.799163       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-5dtxg"
I0802 09:31:40.848202       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-b45sg"
I0802 09:31:40.877454       1 utils.go:413] couldn't find ipfamilies for headless service: services-295/tolerate-unready. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.114.99).
I0802 09:31:40.900874       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-9gvd5"
I0802 09:31:40.948835       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-s9p2x"
I0802 09:31:41.049663       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-zbzcc"
I0802 09:31:41.099493       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-kqjp8"
I0802 09:31:41.123508       1 namespace_controller.go:185] Namespace has been deleted deployment-5419
I0802 09:31:41.148806       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-892x5"
I0802 09:31:41.198819       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-cn72t"
I0802 09:31:41.248155       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-z6g22"
I0802 09:31:41.309256       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-hz9gv"
E0802 09:31:41.334265       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-154/default: secrets "default-token-s9cjr" is forbidden: unable to create new content in namespace persistent-local-volumes-test-154 because it is being terminated
I0802 09:31:41.349231       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-lpbp8"
I0802 09:31:41.384078       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "vol1" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92") from node "ip-172-20-56-163.ap-southeast-2.compute.internal" 
E0802 09:31:41.387468       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0802 09:31:41.400442       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-bp7b9"
I0802 09:31:41.432262       1 aws.go:2014] Assigned mount device cj -> volume vol-016a3609f5cd6ac92
I0802 09:31:41.449884       1 event.go:291] "Event occurred" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-6tx9j"
E0802 09:31:41.499339       1 aws.go:2405] "error attaching EBS volume \"vol-016a3609f5cd6ac92\"" to instance "i-0f96665b2ac6ca911" since volume is in "creating" state
I0802 09:31:41.499362       1 aws.go:2037] Releasing in-process attachment entry: cj -> volume vol-016a3609f5cd6ac92
E0802 09:31:41.499369       1 attacher.go:86] Error attaching volume "aws://ap-southeast-2a/vol-016a3609f5cd6ac92" to node "ip-172-20-56-163.ap-southeast-2.compute.internal": "error attaching EBS volume \"vol-016a3609f5cd6ac92\"" to instance "i-0f96665b2ac6ca911" since volume is in "creating" state
E0802 09:31:41.499634       1 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92 podName: nodeName:ip-172-20-56-163.ap-southeast-2.compute.internal}" failed. No retries permitted until 2021-08-02 09:31:41.999548223 +0000 UTC m=+383.944114150 (durationBeforeRetry 500ms). Error: "AttachVolume.Attach failed for volume \"vol1\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92\") from node \"ip-172-20-56-163.ap-southeast-2.compute.internal\" : \"error attaching EBS volume \\\"vol-016a3609f5cd6ac92\\\"\" to instance \"i-0f96665b2ac6ca911\" since volume is in \"creating\" state"
I0802 09:31:41.499707       1 event.go:291] "Event occurred" object="volume-4873/exec-volume-test-inlinevolume-wbnv" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="AttachVolume.Attach failed for volume \"vol1\" : \"error attaching EBS volume \\\"vol-016a3609f5cd6ac92\\\"\" to instance \"i-0f96665b2ac6ca911\" since volume is in \"creating\" state"
I0802 09:31:41.735529       1 namespace_controller.go:185] Namespace has been deleted provisioning-8941
I0802 09:31:41.793828       1 namespace_controller.go:185] Namespace has been deleted pod-network-test-2117
I0802 09:31:41.816358       1 aws.go:2037] Releasing in-process attachment entry: bj -> volume vol-0c9068da7fc32fe35
I0802 09:31:41.816420       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0c9068da7fc32fe35") from node "ip-172-20-35-97.ap-southeast-2.compute.internal" 
I0802 09:31:41.816558       1 event.go:291] "Event occurred" object="volume-6111/exec-volume-test-dynamicpv-zs2k" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546\" "
E0802 09:31:41.952740       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-376/default: secrets "default-token-b9jrj" is forbidden: unable to create new content in namespace provisioning-376 because it is being terminated
I0802 09:31:42.010526       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "vol1" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92") from node "ip-172-20-56-163.ap-southeast-2.compute.internal" 
I0802 09:31:42.055280       1 aws.go:2014] Assigned mount device by -> volume vol-016a3609f5cd6ac92
E0802 09:31:42.100776       1 aws.go:2405] "error attaching EBS volume \"vol-016a3609f5cd6ac92\"" to instance "i-0f96665b2ac6ca911" since volume is in "creating" state
I0802 09:31:42.100873       1 aws.go:2037] Releasing in-process attachment entry: by -> volume vol-016a3609f5cd6ac92
E0802 09:31:42.100896       1 attacher.go:86] Error attaching volume "aws://ap-southeast-2a/vol-016a3609f5cd6ac92" to node "ip-172-20-56-163.ap-southeast-2.compute.internal": "error attaching EBS volume \"vol-016a3609f5cd6ac92\"" to instance "i-0f96665b2ac6ca911" since volume is in "creating" state
I0802 09:31:42.101006       1 actual_state_of_world.go:350] Volume "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92" is already added to attachedVolume list to node "ip-172-20-56-163.ap-southeast-2.compute.internal", update device path ""
E0802 09:31:42.101152       1 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92 podName: nodeName:ip-172-20-56-163.ap-southeast-2.compute.internal}" failed. No retries permitted until 2021-08-02 09:31:43.101095432 +0000 UTC m=+385.045661346 (durationBeforeRetry 1s). Error: "AttachVolume.Attach failed for volume \"vol1\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92\") from node \"ip-172-20-56-163.ap-southeast-2.compute.internal\" : \"error attaching EBS volume \\\"vol-016a3609f5cd6ac92\\\"\" to instance \"i-0f96665b2ac6ca911\" since volume is in \"creating\" state"
I0802 09:31:42.101301       1 event.go:291] "Event occurred" object="volume-4873/exec-volume-test-inlinevolume-wbnv" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="AttachVolume.Attach failed for volume \"vol1\" : \"error attaching EBS volume \\\"vol-016a3609f5cd6ac92\\\"\" to instance \"i-0f96665b2ac6ca911\" since volume is in \"creating\" state"
I0802 09:31:42.452585       1 event.go:291] "Event occurred" object="volume-expand-5557/awsb9ck4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0802 09:31:42.872419       1 aws.go:2291] Waiting for volume "vol-03c43059308180a28" state: actual=detaching, desired=detached
I0802 09:31:43.111794       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "vol1" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92") from node "ip-172-20-56-163.ap-southeast-2.compute.internal" 
I0802 09:31:43.157563       1 aws.go:2014] Assigned mount device be -> volume vol-016a3609f5cd6ac92
I0802 09:31:43.242138       1 event.go:291] "Event occurred" object="volume-7834/awsmkhrs" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0802 09:31:43.280643       1 route_controller.go:294] set node ip-172-20-35-97.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:43.283017       1 route_controller.go:294] set node ip-172-20-47-13.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:43.283030       1 route_controller.go:294] set node ip-172-20-43-68.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:43.283041       1 route_controller.go:294] set node ip-172-20-48-162.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:31:43.283051       1 route_controller.go:294] set node ip-172-20-56-163.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
E0802 09:31:43.527285       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-695/default: secrets "default-token-vmbmj" is forbidden: unable to create new content in namespace provisioning-695 because it is being terminated
I0802 09:31:43.537694       1 aws.go:2427] AttachVolume volume="vol-016a3609f5cd6ac92" instance="i-0f96665b2ac6ca911" request returned {
  AttachTime: 2021-08-02 09:31:43.526 +0000 UTC,
  Device: "/dev/xvdbe",
  InstanceId: "i-0f96665b2ac6ca911",
  State: "attaching",
  VolumeId: "vol-016a3609f5cd6ac92"
}
E0802 09:31:44.111557       1 tokens_controller.go:262] error synchronizing serviceaccount tables-5959/default: secrets "default-token-9dtrv" is forbidden: unable to create new content in namespace tables-5959 because it is being terminated
I0802 09:31:44.992512       1 aws.go:2291] Waiting for volume "vol-03c43059308180a28" state: actual=detaching, desired=detached
I0802 09:31:45.642425       1 aws.go:2037] Releasing in-process attachment entry: be -> volume vol-016a3609f5cd6ac92
I0802 09:31:45.642637       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92") from node "ip-172-20-56-163.ap-southeast-2.compute.internal" 
I0802 09:31:45.643083       1 actual_state_of_world.go:350] Volume "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92" is already added to attachedVolume list to node "ip-172-20-56-163.ap-southeast-2.compute.internal", update device path "/dev/xvdbe"
I0802 09:31:45.642796       1 event.go:291] "Event occurred" object="volume-4873/exec-volume-test-inlinevolume-wbnv" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"vol1\" "
E0802 09:31:45.988676       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0802 09:31:46.366724       1 tokens_controller.go:262] error synchronizing serviceaccount subpath-7154/default: secrets "default-token-sgz5h" is forbidden: unable to create new content in namespace subpath-7154 because it is being terminated
E0802 09:31:46.402104       1 tokens_controller.go:262] error synchronizing serviceaccount subpath-1922/default: secrets "default-token-6slwh" is forbidden: unable to create new content in namespace subpath-1922 because it is being terminated
I0802 09:31:46.465317       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-2788/pod-0a563f57-3a19-475e-b10d-d5a5d6bbf372 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-d8927 pvc- persistent-local-volumes-test-2788  cf06ca51-78e4-462e-aac1-91dd22dc8e41 27633 0 2021-08-02 09:31:30 +0000 UTC 2021-08-02 09:31:36 +0000 UTC 0xc000c9b478 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-08-02 09:31:30 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-08-02 09:31:30 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvp8l7z,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-2788,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0802 09:31:46.466387       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-2788/pvc-d8927 because it is still being used\nI0802 09:31:46.487411       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-154\nI0802 09:31:46.503586       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-2788/pvc-d8927 is unused\nI0802 09:31:46.555898       1 pv_controller.go:638] volume \"local-pvp8l7z\" is released and reclaim policy \"Retain\" will be executed\nI0802 09:31:46.560569       1 pv_controller.go:864] volume \"local-pvp8l7z\" entered phase \"Released\"\nI0802 09:31:46.571595       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-2788/pvc-d8927\" was already processed\nE0802 09:31:46.642000       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch 
*v1.PartialObjectMetadata: the server could not find the requested resource\nI0802 09:31:47.029907       1 namespace_controller.go:185] Namespace has been deleted provisioning-376\nE0802 09:31:47.450273       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0802 09:31:48.033639       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI0802 09:31:48.223811       1 namespace_controller.go:185] Namespace has been deleted secrets-4131\nI0802 09:31:48.562119       1 namespace_controller.go:185] Namespace has been deleted provisioning-695\nI0802 09:31:48.593823       1 namespace_controller.go:185] Namespace has been deleted node-lease-test-6737\nI0802 09:31:48.977693       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-southeast-2a/vol-097990a51f0ecfd41\nI0802 09:31:48.981652       1 utils.go:413] couldn't find ipfamilies for headless service: services-2137/endpoint-test2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.50.254).\nI0802 09:31:49.022604       1 pv_controller.go:1652] volume \"pvc-49d7569b-9f1c-4ac7-aff3-a93975df5d46\" provisioned for claim \"volume-7834/awsmkhrs\"\nI0802 09:31:49.022915       1 event.go:291] \"Event occurred\" object=\"volume-7834/awsmkhrs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-49d7569b-9f1c-4ac7-aff3-a93975df5d46 using kubernetes.io/aws-ebs\"\nI0802 09:31:49.026567       1 pv_controller.go:864] volume \"pvc-49d7569b-9f1c-4ac7-aff3-a93975df5d46\" entered phase \"Bound\"\nI0802 09:31:49.026605       1 pv_controller.go:967] volume \"pvc-49d7569b-9f1c-4ac7-aff3-a93975df5d46\" bound to claim \"volume-7834/awsmkhrs\"\nI0802 09:31:49.031653       1 pv_controller.go:808] claim \"volume-7834/awsmkhrs\" entered phase \"Bound\"\nI0802 09:31:49.061092       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-08-02 09:30:38 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdcg\",\n  InstanceId: \"i-0a44735e77bbb5a11\",\n  State: \"detaching\",\n  VolumeId: \"vol-03c43059308180a28\"\n}\nI0802 09:31:49.061132       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-03c43059308180a28\") on node \"ip-172-20-47-13.ap-southeast-2.compute.internal\" \nI0802 09:31:49.220244       1 namespace_controller.go:185] Namespace has been deleted tables-5959\nE0802 09:31:49.448338       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0802 09:31:49.682482       1 reconciler.go:295] attacherDetacher.AttachVolume started 
for volume \"pvc-49d7569b-9f1c-4ac7-aff3-a93975df5d46\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-097990a51f0ecfd41\") from node \"ip-172-20-47-13.ap-southeast-2.compute.internal\" \nI0802 09:31:49.757344       1 aws.go:2014] Assigned mount device bm -> volume vol-097990a51f0ecfd41\nI0802 09:31:49.827050       1 expand_controller.go:277] Ignoring the PVC \"csi-mock-volumes-2168/pvc-5wtzr\" (uid: \"4bad0faf-5edc-49e4-97dd-0f773a63f216\") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nI0802 09:31:49.827382       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2168/pvc-5wtzr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ExternalExpanding\" message=\"Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\"\nI0802 09:31:49.867856       1 namespace_controller.go:185] Namespace has been deleted pv-5237\nE0802 09:31:49.872528       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0802 09:31:49.985591       1 utils.go:413] couldn't find ipfamilies for headless service: services-2137/endpoint-test2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.50.254).\nI0802 09:31:50.168313       1 aws.go:2427] AttachVolume volume=\"vol-097990a51f0ecfd41\" instance=\"i-0a44735e77bbb5a11\" request returned {\n  AttachTime: 2021-08-02 09:31:50.153 +0000 UTC,\n  Device: \"/dev/xvdbm\",\n  InstanceId: \"i-0a44735e77bbb5a11\",\n  State: \"attaching\",\n  VolumeId: \"vol-097990a51f0ecfd41\"\n}\nE0802 09:31:50.200156       1 tokens_controller.go:262] error synchronizing serviceaccount lease-test-3806/default: secrets \"default-token-99ht4\" is forbidden: unable to create new content in namespace lease-test-3806 because it is being terminated\nI0802 09:31:50.501399       1 pvc_protection_controller.go:291] PVC provisioning-1035/pvc-rjwxj is unused\nI0802 09:31:50.506771       1 pv_controller.go:638] volume \"local-gk4xg\" is released and reclaim policy \"Retain\" will be executed\nI0802 09:31:50.509796       1 pv_controller.go:864] volume \"local-gk4xg\" entered phase \"Released\"\nI0802 09:31:50.513815       1 utils.go:413] couldn't find ipfamilies for headless service: services-2137/endpoint-test2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.50.254).\nI0802 09:31:50.694414       1 pv_controller_base.go:504] deletion of claim \"provisioning-1035/pvc-rjwxj\" was already processed\nI0802 09:31:51.722351       1 namespace_controller.go:185] Namespace has been deleted subpath-1922\nI0802 09:31:51.732302       1 namespace_controller.go:185] Namespace has been deleted subpath-7154\nI0802 09:31:51.793852       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-2788\nI0802 09:31:52.014901       1 namespace_controller.go:185] Namespace has been deleted projected-8448\nI0802 09:31:52.281451       1 aws.go:2037] Releasing in-process attachment entry: bm -> volume vol-097990a51f0ecfd41\nI0802 09:31:52.281603       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-49d7569b-9f1c-4ac7-aff3-a93975df5d46\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-097990a51f0ecfd41\") from node \"ip-172-20-47-13.ap-southeast-2.compute.internal\" \nI0802 09:31:52.281711       1 event.go:291] \"Event occurred\" object=\"volume-7834/aws-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-49d7569b-9f1c-4ac7-aff3-a93975df5d46\\\" \"\nE0802 09:31:52.364460       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0802 09:31:52.697285       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-468/test-quota\nI0802 09:31:52.808159       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"services-295/slow-terminating-unready-pod\" need=0 deleting=1\nE0802 09:31:52.808923       1 replica_set.go:201] ReplicaSet has no controller: 
&ReplicaSet{ObjectMeta:{slow-terminating-unready-pod  services-295  e77fc447-0824-40b5-a7b2-40ecd52e735f 28313 2 2021-08-02 09:31:39 +0000 UTC <nil> <nil> map[name:slow-terminating-unready-pod testid:tolerate-unready-7c8c6e9d-809d-4a0a-a315-5033db029c33] map[] [] []  [{e2e.test Update v1 2021-08-02 09:31:39 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:labels\":{\".\":{},\"f:name\":{},\"f:testid\":{}}},\"f:spec\":{\"f:replicas\":{},\"f:selector\":{\".\":{},\"f:name\":{}},\"f:template\":{\".\":{},\"f:metadata\":{\".\":{},\"f:creationTimestamp\":{},\"f:labels\":{\".\":{},\"f:name\":{},\"f:testid\":{}}},\"f:spec\":{\".\":{},\"f:containers\":{\".\":{},\"k:{\\\"name\\\":\\\"slow-terminating-unready-pod\\\"}\":{\".\":{},\"f:args\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:lifecycle\":{\".\":{},\"f:preStop\":{\".\":{},\"f:exec\":{\".\":{},\"f:command\":{}}}},\"f:name\":{},\"f:ports\":{\".\":{},\"k:{\\\"containerPort\\\":80,\\\"protocol\\\":\\\"TCP\\\"}\":{\".\":{},\"f:containerPort\":{},\"f:protocol\":{}}},\"f:readinessProbe\":{\".\":{},\"f:exec\":{\".\":{},\"f:command\":{}},\"f:failureThreshold\":{},\"f:periodSeconds\":{},\"f:successThreshold\":{},\"f:timeoutSeconds\":{}},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}}} {kube-controller-manager Update v1 2021-08-02 09:31:39 +0000 UTC FieldsV1 {\"f:status\":{\"f:fullyLabeledReplicas\":{},\"f:observedGeneration\":{},\"f:replicas\":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: slow-terminating-unready-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:slow-terminating-unready-pod testid:tolerate-unready-7c8c6e9d-809d-4a0a-a315-5033db029c33] map[] [] []  []} {[] [] [{slow-terminating-unready-pod 
k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [netexec --http-port=80]  [{ 0 80 TCP }] [] [] {map[] map[]} [] [] nil Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/false],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil &Lifecycle{PostStart:nil,PreStop:&Handler{Exec:&ExecAction{Command:[/bin/sleep 600],},HTTPGet:nil,TCPSocket:nil,},} /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002af2ca8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}\nI0802 09:31:52.808974       1 controller_utils.go:604] \"Deleting pod\" controller=\"slow-terminating-unready-pod\" pod=\"services-295/slow-terminating-unready-pod-kmktz\"\nI0802 09:31:52.812284       1 event.go:291] \"Event occurred\" object=\"services-295/slow-terminating-unready-pod\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: slow-terminating-unready-pod-kmktz\"\nI0802 09:31:52.812550       1 utils.go:413] couldn't find ipfamilies for headless service: services-295/tolerate-unready. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.114.99).\nI0802 09:31:53.002299       1 namespace_controller.go:185] Namespace has been deleted container-probe-635\nI0802 09:31:53.307869       1 route_controller.go:294] set node ip-172-20-56-163.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:53.307869       1 route_controller.go:294] set node ip-172-20-43-68.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:53.307881       1 route_controller.go:294] set node ip-172-20-35-97.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:53.307893       1 route_controller.go:294] set node ip-172-20-47-13.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:53.307918       1 route_controller.go:294] set node ip-172-20-48-162.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:31:53.381570       1 utils.go:413] couldn't find ipfamilies for headless service: services-295/tolerate-unready. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.114.99).\nI0802 09:31:53.742834       1 event.go:291] \"Event occurred\" object=\"volume-expand-5557/awsb9ck4\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0802 09:31:53.745217       1 pv_controller.go:1326] isVolumeReleased[pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa]: volume is released\nI0802 09:31:53.821502       1 utils.go:413] couldn't find ipfamilies for headless service: services-295/tolerate-unready. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.114.99).\nI0802 09:31:53.933649       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-southeast-2a/vol-03c43059308180a28\nI0802 09:31:53.933676       1 pv_controller.go:1421] volume \"pvc-168b129d-e2b5-49cd-8733-06df7cd55dfa\" deleted\nI0802 09:31:53.942154       1 pv_controller_base.go:504] deletion of claim \"volume-5525/awssdqb4\" was already processed\nI0802 09:31:54.879056       1 utils.go:413] couldn't find ipfamilies for headless service: services-2137/endpoint-test2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.50.254).\nE0802 09:31:54.933711       1 tokens_controller.go:262] error synchronizing serviceaccount gc-2385/default: secrets \"default-token-lkplh\" is forbidden: unable to create new content in namespace gc-2385 because it is being terminated\nI0802 09:31:54.940368       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"gc-2385/simpletest.deployment-5bc74fd66c\" need=2 creating=1\nI0802 09:31:55.253100       1 namespace_controller.go:185] Namespace has been deleted lease-test-3806\nE0802 09:31:55.407805       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0802 09:31:55.865533       1 utils.go:413] couldn't find ipfamilies for headless service: services-295/tolerate-unready. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.114.99).\nI0802 09:31:55.875575       1 utils.go:413] couldn't find ipfamilies for headless service: services-2137/endpoint-test2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.50.254).\nW0802 09:31:55.884454       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"services-2137/endpoint-test2\", retrying. 
Error: EndpointSlice informer cache is out of date\nI0802 09:31:55.893222       1 endpoints_controller.go:363] \"Error syncing endpoints, retrying\" service=\"services-2137/endpoint-test2\" err=\"Operation cannot be fulfilled on endpoints \\\"endpoint-test2\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0802 09:31:55.893272       1 event.go:291] \"Event occurred\" object=\"services-2137/endpoint-test2\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint services-2137/endpoint-test2: Operation cannot be fulfilled on endpoints \\\"endpoint-test2\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0802 09:31:56.001107       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nE0802 09:31:56.230277       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0802 09:31:56.428089       1 pvc_protection_controller.go:291] PVC provisioning-11/pvc-5brt6 is unused\nI0802 09:31:56.434940       1 pv_controller.go:638] volume \"local-swtg4\" is released and reclaim policy \"Retain\" will be executed\nI0802 09:31:56.437898       1 pv_controller.go:864] volume \"local-swtg4\" entered phase \"Released\"\nI0802 09:31:56.621494       1 pv_controller_base.go:504] deletion of claim \"provisioning-11/pvc-5brt6\" was already processed\nI0802 09:31:56.849930       1 utils.go:413] couldn't find ipfamilies for headless service: services-2137/endpoint-test2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.50.254).\nI0802 09:31:56.857330       1 utils.go:413] couldn't find ipfamilies for headless service: services-2137/endpoint-test2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.50.254).\nI0802 09:31:56.884748       1 utils.go:413] couldn't find ipfamilies for headless service: services-2137/endpoint-test2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.50.254).\nE0802 09:31:57.213362       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0802 09:31:57.513873       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0bc3f5c1db447bf80\") on node \"ip-172-20-35-97.ap-southeast-2.compute.internal\" \nI0802 09:31:57.516016       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0bc3f5c1db447bf80\") on node \"ip-172-20-35-97.ap-southeast-2.compute.internal\" \nI0802 09:31:57.638669       1 garbagecollector.go:471] \"Processing object\" object=\"services-2137/endpoint-test2-p2pvj\" objectUID=e8482f61-776b-43aa-96cd-68a01c1c91ec kind=\"EndpointSlice\" virtual=false\nI0802 09:31:57.714830       1 namespace_controller.go:185] Namespace has been deleted resourcequota-468\nE0802 09:31:57.818610       1 tokens_controller.go:262] error 
synchronizing serviceaccount port-forwarding-3933/default: secrets \"default-token-h65lf\" is forbidden: unable to create new content in namespace port-forwarding-3933 because it is being terminated\nE0802 09:31:57.960275       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-8260/pvc-thpjr: storageclass.storage.k8s.io \"provisioning-8260\" not found\nI0802 09:31:57.960505       1 event.go:291] \"Event occurred\" object=\"provisioning-8260/pvc-thpjr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-8260\\\" not found\"\nI0802 09:31:58.142620       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2137/endpoint-test2-p2pvj\" objectUID=e8482f61-776b-43aa-96cd-68a01c1c91ec kind=\"EndpointSlice\" propagationPolicy=Background\nI0802 09:31:58.154380       1 pv_controller.go:864] volume \"local-5wl2r\" entered phase \"Available\"\nI0802 09:31:58.227826       1 utils.go:413] couldn't find ipfamilies for headless service: services-295/tolerate-unready. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.114.99).\nI0802 09:31:58.235765       1 utils.go:413] couldn't find ipfamilies for headless service: services-295/tolerate-unready. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.114.99).\nI0802 09:31:58.476503       1 pvc_protection_controller.go:291] PVC provisioning-5488/pvc-h2zwn is unused\nI0802 09:31:58.485021       1 pv_controller.go:638] volume \"local-w5xbx\" is released and reclaim policy \"Retain\" will be executed\nI0802 09:31:58.487622       1 pv_controller.go:864] volume \"local-w5xbx\" entered phase \"Released\"\nI0802 09:31:58.663914       1 pv_controller_base.go:504] deletion of claim \"provisioning-5488/pvc-h2zwn\" was already processed\nI0802 09:31:58.999899       1 garbagecollector.go:471] \"Processing object\" object=\"services-295/tolerate-unready-lg2mf\" objectUID=eb6ecd8b-fe66-43aa-a25d-f3ce74565cd4 kind=\"EndpointSlice\" virtual=false\nI0802 09:31:59.008156       1 garbagecollector.go:580] \"Deleting object\" object=\"services-295/tolerate-unready-lg2mf\" objectUID=eb6ecd8b-fe66-43aa-a25d-f3ce74565cd4 kind=\"EndpointSlice\" propagationPolicy=Background\nE0802 09:31:59.180373       1 tokens_controller.go:262] error synchronizing serviceaccount security-context-test-92/default: secrets \"default-token-p2tc5\" is forbidden: unable to create new content in namespace security-context-test-92 because it is being terminated\nI0802 09:31:59.208208       1 pvc_protection_controller.go:291] PVC topology-5283/pvc-mj4xb is unused\nI0802 09:31:59.219456       1 pv_controller.go:638] volume \"pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd\" is released and reclaim policy \"Delete\" will be executed\nI0802 09:31:59.223890       1 pv_controller.go:864] volume \"pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd\" entered phase \"Released\"\nI0802 09:31:59.225789       1 pv_controller.go:1326] isVolumeReleased[pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd]: volume is released\nI0802 09:31:59.420986       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-2a/vol-0bc3f5c1db447bf80: error deleting EBS volume \"vol-0bc3f5c1db447bf80\" since 
volume is currently attached to "i-002cc620c967da679"
E0802 09:31:59.421342       1 goroutinemap.go:150] Operation for "delete-pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd[dfb2ab2c-0be9-4e72-b02c-2c73baaa812d]" failed. No retries permitted until 2021-08-02 09:31:59.921194217 +0000 UTC m=+401.865760148 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-0bc3f5c1db447bf80\" since volume is currently attached to \"i-002cc620c967da679\""
I0802 09:31:59.423821       1 event.go:291] "Event occurred" object="pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0bc3f5c1db447bf80\" since volume is currently attached to \"i-002cc620c967da679\""
I0802 09:31:59.480224       1 pv_controller.go:864] volume "local-pvcsfmn" entered phase "Available"
I0802 09:31:59.666387       1 pv_controller.go:915] claim "persistent-local-volumes-test-3846/pvc-csjcn" bound to volume "local-pvcsfmn"
I0802 09:31:59.672174       1 pv_controller.go:864] volume "local-pvcsfmn" entered phase "Bound"
I0802 09:31:59.672215       1 pv_controller.go:967] volume "local-pvcsfmn" bound to claim "persistent-local-volumes-test-3846/pvc-csjcn"
I0802 09:31:59.676930       1 pv_controller.go:808] claim "persistent-local-volumes-test-3846/pvc-csjcn" entered phase "Bound"
E0802 09:31:59.949523       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0802 09:32:00.060093       1 namespace_controller.go:185] Namespace has been deleted gc-2385
I0802 09:32:00.585828       1 utils.go:413] couldn't find ipfamilies for headless service: services-9578/hairpin-test. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.179.245).
I0802 09:32:00.776660       1 utils.go:413] couldn't find ipfamilies for headless service: services-9578/hairpin-test. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.179.245).
E0802 09:32:01.273039       1 pv_controller.go:1437] error finding provisioning plugin for claim resourcequota-2886/test-claim: storageclass.storage.k8s.io "gold" not found
I0802 09:32:01.273253       1 event.go:291] "Event occurred" object="resourcequota-2886/test-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"gold\" not found"
I0802 09:32:01.285504       1 pvc_protection_controller.go:291] PVC volume-6111/aws97kgv is unused
I0802 09:32:01.295088       1 pv_controller.go:638] volume "pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546" is released and reclaim policy "Delete" will be executed
I0802 09:32:01.303025       1 pv_controller.go:864] volume "pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546" entered phase "Released"
I0802 09:32:01.306813       1 pv_controller.go:1326] isVolumeReleased[pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546]: volume is released
I0802 09:32:01.439505       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-2a/vol-0c9068da7fc32fe35: error deleting EBS volume "vol-0c9068da7fc32fe35" since volume is currently attached to "i-002cc620c967da679"
E0802 09:32:01.439685       1 goroutinemap.go:150] Operation for "delete-pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546[b9318fe0-6d2d-4f1c-9b60-d99c474b1576]" failed. No retries permitted until 2021-08-02 09:32:01.93966172 +0000 UTC m=+403.884227652 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-0c9068da7fc32fe35\" since volume is currently attached to \"i-002cc620c967da679\""
I0802 09:32:01.439922       1 event.go:291] "Event occurred" object="pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0c9068da7fc32fe35\" since volume is currently attached to \"i-002cc620c967da679\""
I0802 09:32:01.990918       1 utils.go:413] couldn't find ipfamilies for headless service: services-9578/hairpin-test. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.179.245).
I0802 09:32:01.994834       1 namespace_controller.go:185] Namespace has been deleted security-context-test-5610
I0802 09:32:02.050261       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [kubectl.example.com/v1, Resource=e2e-test-kubectl-7057-crds], removed: [crd-publish-openapi-test-foo.example.com/v1, Resource=e2e-test-crd-publish-openapi-773-crds crd-publish-openapi-test-waldo.example.com/v1beta1, Resource=e2e-test-crd-publish-openapi-411-crds resourcequota.example.com/v1, Resource=e2e-test-resourcequota-6631-crds]
I0802 09:32:02.050375       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-kubectl-7057-crds.kubectl.example.com
I0802 09:32:02.050421       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0802 09:32:02.050564       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (15h5m20.991945973s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0802 09:32:02.150546       1 shared_informer.go:247] Caches are synced for resource quota
I0802 09:32:02.150738       1 resource_quota_controller.go:454] synced quota controller
I0802 09:32:02.676102       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-p8bxz" objectUID=525562ae-4d7b-4f56-85f1-4003712817ac kind="Pod" virtual=false
I0802 09:32:02.677894       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-vmkjf" objectUID=dcd13514-6d4d-4e5e-ad91-4c57954a7583 kind="Pod" virtual=false
I0802 09:32:02.678151       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-2496m" objectUID=af524676-be61-43d5-8ddc-108a238a0b29 kind="Pod" virtual=false
I0802 09:32:02.678340       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-x4d45" objectUID=94c30a4a-955a-44e6-be0e-ae905e4fd514 kind="Pod" virtual=false
I0802 09:32:02.678526       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-kqjp8" objectUID=326e7314-b397-409b-ae2c-9273bdb00799 kind="Pod" virtual=false
I0802 09:32:02.678696       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-7dw5d" objectUID=d6b7a8d5-0237-4a6f-b516-08901b350f06 kind="Pod" virtual=false
I0802 09:32:02.679325       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-pxrwj" objectUID=b2cb12fc-ab32-4eae-a195-b27524f84e13 kind="Pod" virtual=false
I0802 09:32:02.680640       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-2qh2q" objectUID=33692902-338a-4b22-984f-0a89c5c07a5b kind="Pod" virtual=false
I0802 09:32:02.680968       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-gpc8p" objectUID=f29a5ffd-582e-4809-8262-b41d9fe3f842 kind="Pod" virtual=false
I0802 09:32:02.681174       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-jczg2" objectUID=28c3c865-a151-4084-aec6-03bebad4fdc7 kind="Pod" virtual=false
I0802 09:32:02.681357       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-v2w2c" objectUID=b4342edc-6cdc-4db9-93f2-cbdc50a0c887 kind="Pod" virtual=false
I0802 09:32:02.681535       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-brks5" objectUID=7ced1797-e801-4d24-a57e-f11c414546a8 kind="Pod" virtual=false
I0802 09:32:02.681714       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-jzfct" objectUID=3ea712fa-0ae9-4d69-a847-b8e687782a0e kind="Pod" virtual=false
I0802 09:32:02.681878       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-5dtxg" objectUID=2e027675-4009-4a59-9646-f29b28c0e0c4 kind="Pod" virtual=false
I0802 09:32:02.682069       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-9gvd5" objectUID=b3946aef-511c-4caf-a2df-1486f3653b6a kind="Pod" virtual=false
I0802 09:32:02.682313       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-lpbp8" objectUID=9c9b5bd9-91c1-49a2-9e5e-c3650be2da40 kind="Pod" virtual=false
I0802 09:32:02.682472       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-bp7b9" objectUID=23aa0f8c-ca4b-4cbe-ad2f-028e09f760c2 kind="Pod" virtual=false
I0802 09:32:02.682689       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-6tx9j" objectUID=dc5b67a7-00ec-423b-8f28-423aadbaac34 kind="Pod" virtual=false
I0802 09:32:02.682826       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-2hrw7" objectUID=c11cd8c3-a1b6-4672-a0bc-45cfa719301c kind="Pod" virtual=false
I0802 09:32:02.683065       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-g885n" objectUID=e32a24cb-c46e-4a17-9fa3-d5501d480452 kind="Pod" virtual=false
I0802 09:32:02.690480       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-p8bxz" objectUID=525562ae-4d7b-4f56-85f1-4003712817ac kind="Pod" propagationPolicy=Background
I0802 09:32:02.697160       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-kqjp8" objectUID=326e7314-b397-409b-ae2c-9273bdb00799 kind="Pod" propagationPolicy=Background
I0802 09:32:02.697517       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-lpbp8" objectUID=9c9b5bd9-91c1-49a2-9e5e-c3650be2da40 kind="Pod" propagationPolicy=Background
I0802 09:32:02.697934       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-2496m" objectUID=af524676-be61-43d5-8ddc-108a238a0b29 kind="Pod" propagationPolicy=Background
I0802 09:32:02.698207       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-vmkjf" objectUID=dcd13514-6d4d-4e5e-ad91-4c57954a7583 kind="Pod" propagationPolicy=Background
I0802 09:32:02.698450       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-gpc8p" objectUID=f29a5ffd-582e-4809-8262-b41d9fe3f842 kind="Pod" propagationPolicy=Background
I0802 09:32:02.698742       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-pxrwj" objectUID=b2cb12fc-ab32-4eae-a195-b27524f84e13 kind="Pod" propagationPolicy=Background
I0802 09:32:02.699016       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-7dw5d" objectUID=d6b7a8d5-0237-4a6f-b516-08901b350f06 kind="Pod" propagationPolicy=Background
I0802 09:32:02.699304       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-jczg2" objectUID=28c3c865-a151-4084-aec6-03bebad4fdc7 kind="Pod" propagationPolicy=Background
I0802 09:32:02.699562       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-5dtxg" objectUID=2e027675-4009-4a59-9646-f29b28c0e0c4 kind="Pod" propagationPolicy=Background
I0802 09:32:02.699881       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-jzfct" objectUID=3ea712fa-0ae9-4d69-a847-b8e687782a0e kind="Pod" propagationPolicy=Background
I0802 09:32:02.700067       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-x4d45" objectUID=94c30a4a-955a-44e6-be0e-ae905e4fd514 kind="Pod" propagationPolicy=Background
I0802 09:32:02.700270       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-2qh2q" objectUID=33692902-338a-4b22-984f-0a89c5c07a5b kind="Pod" propagationPolicy=Background
I0802 09:32:02.700515       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-g885n" objectUID=e32a24cb-c46e-4a17-9fa3-d5501d480452 kind="Pod" propagationPolicy=Background
I0802 09:32:02.701041       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-9gvd5" objectUID=b3946aef-511c-4caf-a2df-1486f3653b6a kind="Pod" propagationPolicy=Background
I0802 09:32:02.701246       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-v2w2c" objectUID=b4342edc-6cdc-4db9-93f2-cbdc50a0c887 kind="Pod" propagationPolicy=Background
I0802 09:32:02.701474       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-brks5" objectUID=7ced1797-e801-4d24-a57e-f11c414546a8 kind="Pod" propagationPolicy=Background
I0802 09:32:02.701699       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-6tx9j" objectUID=dc5b67a7-00ec-423b-8f28-423aadbaac34 kind="Pod" propagationPolicy=Background
I0802 09:32:02.701903       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-2hrw7" objectUID=c11cd8c3-a1b6-4672-a0bc-45cfa719301c kind="Pod" propagationPolicy=Background
I0802 09:32:02.702192       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-bp7b9" objectUID=23aa0f8c-ca4b-4cbe-ad2f-028e09f760c2 kind="Pod" propagationPolicy=Background
I0802 09:32:02.702321       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-t6nf7" objectUID=845de677-0cdf-4126-852a-7e074898b90a kind="Pod" virtual=false
I0802 09:32:02.719871       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-d28nn" objectUID=2154a198-ca7f-421f-99fd-c1eaaa374c6a kind="Pod" virtual=false
I0802 09:32:02.722544       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-72wnq" objectUID=3eb812e2-e54a-446a-88e9-aa7fa0b4b1df kind="Pod" virtual=false
I0802 09:32:02.722764       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-kzs6v" objectUID=a51996b4-b5d1-4051-a8e1-02e213b4f899 kind="Pod" virtual=false
I0802 09:32:02.722939       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-cn72t" objectUID=1625de95-e705-45b2-a4e7-c6c62794d8ac kind="Pod" virtual=false
I0802 09:32:02.723093       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-nplcd" objectUID=ebe3f71f-648a-4eba-bbb0-854fd8a7c207 kind="Pod" virtual=false
I0802 09:32:02.723229       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-8f9n8" objectUID=deb6dad2-9747-41d6-a62d-62d90018e514 kind="Pod" virtual=false
I0802 09:32:02.723377       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-qsxdw" objectUID=737d61ff-4064-4e99-9935-3a765e5a2957 kind="Pod" virtual=false
I0802 09:32:02.723503       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-dnj29" objectUID=b8766295-105b-4a0d-b734-468af11c99f7 kind="Pod" virtual=false
I0802 09:32:02.723616       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-s9p2x" objectUID=85059bde-d938-4aaf-9e29-247c52819bfd kind="Pod" virtual=false
I0802 09:32:02.743282       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-djpzw" objectUID=895bf3f1-912e-47c7-9852-dc18346e34a7 kind="Pod" virtual=false
E0802 09:32:02.752390       1 tokens_controller.go:262] error synchronizing serviceaccount volume-5525/default: secrets "default-token-9fhcb" is forbidden: unable to create new content in namespace volume-5525 because it is being terminated
I0802 09:32:02.781226       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-2hc8n" objectUID=2f5ee596-fc41-4523-a9f7-33cf578195a6 kind="Pod" virtual=false
I0802 09:32:02.831708       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-mf5nj" objectUID=c46bb9fa-b15d-4dec-b739-bd9ce20ea6f8 kind="Pod" virtual=false
I0802 09:32:02.852575       1 namespace_controller.go:185] Namespace has been deleted provisioning-1035
E0802 09:32:02.853917       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0802 09:32:02.883105       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-rb289" objectUID=6e3817dc-4933-41c1-bdba-e62dca0fc702 kind="Pod" virtual=false
I0802 09:32:02.931702       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-hz9gv" objectUID=b8fb7c57-e115-4391-93ed-ad64e709b717 kind="Pod" virtual=false
I0802 09:32:02.933182       1 aws.go:2291] Waiting for volume "vol-0bc3f5c1db447bf80" state: actual=detaching, desired=detached
I0802 09:32:02.981312       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-9f7mf" objectUID=5f87ce67-147e-48d3-b549-c23a1b2b2fba kind="Pod" virtual=false
I0802 09:32:02.995594       1 utils.go:413] couldn't find ipfamilies for headless service: services-9578/hairpin-test. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.179.245).
I0802 09:32:03.033238       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-b45sg" objectUID=8aae793f-1f9e-4476-b575-616cb5c48b8b kind="Pod" virtual=false
I0802 09:32:03.084034       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-zbzcc" objectUID=7c0c123f-6240-42e5-81d5-f48615404a36 kind="Pod" virtual=false
E0802 09:32:03.090767       1 tokens_controller.go:262] error synchronizing serviceaccount services-2137/default: secrets "default-token-9pc69" is forbidden: unable to create new content in namespace services-2137 because it is being terminated
I0802 09:32:03.131817       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-892x5" objectUID=9485a0e5-ae9e-4048-9ccd-acb885ac3e2b kind="Pod" virtual=false
I0802 09:32:03.181849       1 garbagecollector.go:471] "Processing object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-z6g22" objectUID=55eca77d-e399-4bd8-b39a-6dbe4f9e81d0 kind="Pod" virtual=false
I0802 09:32:03.230684       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-t6nf7" objectUID=845de677-0cdf-4126-852a-7e074898b90a kind="Pod" propagationPolicy=Background
I0802 09:32:03.254201       1 route_controller.go:294] set node ip-172-20-35-97.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:32:03.254201       1 route_controller.go:294] set node ip-172-20-48-162.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:32:03.254214       1 route_controller.go:294] set node ip-172-20-43-68.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:32:03.254223       1 route_controller.go:294] set node ip-172-20-47-13.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:32:03.254233       1 route_controller.go:294] set node ip-172-20-56-163.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0802 09:32:03.279401       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-d28nn" objectUID=2154a198-ca7f-421f-99fd-c1eaaa374c6a kind="Pod" propagationPolicy=Background
I0802 09:32:03.329039       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-72wnq" objectUID=3eb812e2-e54a-446a-88e9-aa7fa0b4b1df kind="Pod" propagationPolicy=Background
I0802 09:32:03.379550       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-kzs6v" objectUID=a51996b4-b5d1-4051-a8e1-02e213b4f899 kind="Pod" propagationPolicy=Background
I0802 09:32:03.429242       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-cn72t" objectUID=1625de95-e705-45b2-a4e7-c6c62794d8ac kind="Pod" propagationPolicy=Background
I0802 09:32:03.479309       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-nplcd" objectUID=ebe3f71f-648a-4eba-bbb0-854fd8a7c207 kind="Pod" propagationPolicy=Background
I0802 09:32:03.529224       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-8f9n8" objectUID=deb6dad2-9747-41d6-a62d-62d90018e514 kind="Pod" propagationPolicy=Background
I0802 09:32:03.580301       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-qsxdw" objectUID=737d61ff-4064-4e99-9935-3a765e5a2957 kind="Pod" propagationPolicy=Background
I0802 09:32:03.629495       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-dnj29" objectUID=b8766295-105b-4a0d-b734-468af11c99f7 kind="Pod" propagationPolicy=Background
E0802 09:32:03.654632       1 pv_controller.go:1437] error finding provisioning plugin for claim resourcequota-2886/test-claim: storageclass.storage.k8s.io "gold" not found
I0802 09:32:03.655071       1 event.go:291] "Event occurred" object="resourcequota-2886/test-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"gold\" not found"
I0802 09:32:03.657230       1 pvc_protection_controller.go:291] PVC resourcequota-2886/test-claim is unused
I0802 09:32:03.679018       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-s9p2x" objectUID=85059bde-d938-4aaf-9e29-247c52819bfd kind="Pod" propagationPolicy=Background
I0802 09:32:03.729024       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-djpzw" objectUID=895bf3f1-912e-47c7-9852-dc18346e34a7 kind="Pod" propagationPolicy=Background
I0802 09:32:03.779874       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-2hc8n" objectUID=2f5ee596-fc41-4523-a9f7-33cf578195a6 kind="Pod" propagationPolicy=Background
I0802 09:32:03.829297       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-mf5nj" objectUID=c46bb9fa-b15d-4dec-b739-bd9ce20ea6f8 kind="Pod" propagationPolicy=Background
I0802 09:32:03.879214       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-rb289" objectUID=6e3817dc-4933-41c1-bdba-e62dca0fc702 kind="Pod" propagationPolicy=Background
I0802 09:32:03.929406       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-hz9gv" objectUID=b8fb7c57-e115-4391-93ed-ad64e709b717 kind="Pod" propagationPolicy=Background
I0802 09:32:03.979965       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-9f7mf" objectUID=5f87ce67-147e-48d3-b549-c23a1b2b2fba kind="Pod" propagationPolicy=Background
I0802 09:32:04.032418       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-b45sg" objectUID=8aae793f-1f9e-4476-b575-616cb5c48b8b kind="Pod" propagationPolicy=Background
I0802 09:32:04.081092       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-zbzcc" objectUID=7c0c123f-6240-42e5-81d5-f48615404a36 kind="Pod" propagationPolicy=Background
I0802 09:32:04.129333       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-892x5" objectUID=9485a0e5-ae9e-4048-9ccd-acb885ac3e2b kind="Pod" propagationPolicy=Background
I0802 09:32:04.179590       1 garbagecollector.go:580] "Deleting object" object="kubelet-6512/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-z6g22" objectUID=55eca77d-e399-4bd8-b39a-6dbe4f9e81d0 kind="Pod" propagationPolicy=Background
I0802 09:32:04.240344       1 namespace_controller.go:185] Namespace has been deleted security-context-test-92
I0802 09:32:04.331489       1 request.go:655] Throttling request took 1.002286842s, request: DELETE:https://127.0.0.1/api/v1/namespaces/kubelet-6512/pods/cleanup40-1ceca016-1c74-4cac-8712-207f1efae5b3-72wnq
I0802 09:32:04.339787       1 event.go:291] "Event occurred" object="cronjob-6503/concurrent" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job concurrent-1627896720"
I0802 09:32:04.353578       1 event.go:291] "Event occurred" object="cronjob-6503/concurrent-1627896720" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: concurrent-1627896720-z48fs"
I0802 09:32:04.357601       1 cronjob_controller.go:188] Unable to update status for cronjob-6503/concurrent (rv = 26814): Operation cannot be fulfilled on cronjobs.batch "concurrent": the object has been modified; please apply your changes to the latest version and try again
I0802 09:32:04.803487       1 namespace_controller.go:185] Namespace has been deleted nettest-7423
E0802 09:32:04.971181       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0802 09:32:04.992001       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {
  AttachTime: 2021-08-02 09:31:26 +0000 UTC,
  DeleteOnTermination: false,
  Device: "/dev/xvdcw",
  InstanceId: "i-002cc620c967da679",
  State: "detaching",
  VolumeId: "vol-0bc3f5c1db447bf80"
}
I0802 09:32:04.992044       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume "pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0bc3f5c1db447bf80") on node "ip-172-20-35-97.ap-southeast-2.compute.internal"
I0802 09:32:05.051367       1 event.go:291] "Event occurred" object="csi-mock-volumes-8940-9229/csi-mockplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
I0802 09:32:05.241026       1 event.go:291] "Event occurred" object="csi-mock-volumes-8940-9229/csi-mockplugin-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful"
E0802 09:32:05.479025       1 tokens_controller.go:262] error synchronizing serviceaccount crd-publish-openapi-651/default: secrets "default-token-qxjbp" is forbidden: unable to create new content in namespace crd-publish-openapi-651 because it is being terminated
I0802 09:32:05.660153       1 namespace_controller.go:185] Namespace has been deleted proxy-3162
E0802 09:32:06.090291       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0802 09:32:06.495151       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [kubectl.example.com/v1, Resource=e2e-test-kubectl-7057-crds mygroup.example.com/v1beta1, Resource=noxus], removed: [crd-publish-openapi-test-foo.example.com/v1, Resource=e2e-test-crd-publish-openapi-773-crds crd-publish-openapi-test-waldo.example.com/v1beta1, Resource=e2e-test-crd-publish-openapi-411-crds resourcequota.example.com/v1, Resource=e2e-test-resourcequota-6631-crds]
I0802 09:32:06.640656       1 namespace_controller.go:185] Namespace has been deleted clientset-4893
I0802 09:32:06.786743       1 event.go:291] "Event occurred" object="cronjob-6503/concurrent-1627896420" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0802 09:32:07.032271       1 namespace_controller.go:185] Namespace has been deleted volume-5790
I0802 09:32:07.046969       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0802 09:32:07.047152       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (15h5m20.991945973s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0802 09:32:07.147139       1 shared_informer.go:247] Caches are synced for garbage collector
I0802 09:32:07.147168       1 garbagecollector.go:254] synced garbage collector
E0802 09:32:07.300571       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-2872/default: secrets "default-token-mhftd" is forbidden: unable to create new content in namespace ephemeral-2872 because it is being terminated
E0802 09:32:07.376665       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0802 09:32:07.645172       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0c9068da7fc32fe35") on node "ip-172-20-35-97.ap-southeast-2.compute.internal"
I0802 09:32:07.648087       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0c9068da7fc32fe35") on node "ip-172-20-35-97.ap-southeast-2.compute.internal"
I0802 09:32:07.879382       1 namespace_controller.go:185] Namespace has been deleted volume-5525
I0802 09:32:08.011267       1 namespace_controller.go:185] Namespace has been deleted port-forwarding-3933
E0802 09:32:08.145509       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0802 09:32:08.174549       1 namespace_controller.go:185] Namespace has been deleted services-2137
I0802 09:32:08.578488       1 pv_controller.go:864] volume "local-pvks5pn" entered phase "Available"
I0802 09:32:08.742039       1 pv_controller.go:915] claim "provisioning-8260/pvc-thpjr" bound to volume "local-5wl2r"
I0802 09:32:08.742046       1 event.go:291] "Event occurred" object="volume-expand-5557/awsb9ck4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0802 09:32:08.746799       1 pv_controller.go:1326] isVolumeReleased[pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546]: volume is released
I0802 09:32:08.748333       1 pv_controller.go:1326] isVolumeReleased[pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd]: volume is released
I0802 09:32:08.751746       1 pv_controller.go:864] volume "local-5wl2r" entered phase "Bound"
I0802 09:32:08.751767       1 pv_controller.go:967] volume "local-5wl2r" bound to claim "provisioning-8260/pvc-thpjr"
I0802 09:32:08.763692       1 pv_controller.go:808] claim "provisioning-8260/pvc-thpjr" entered phase "Bound"
I0802 09:32:08.766937       1 pv_controller.go:915] claim "persistent-local-volumes-test-1592/pvc-lj95j" bound to volume "local-pvks5pn"
I0802 09:32:08.782518       1 pv_controller.go:864] volume "local-pvks5pn" entered phase "Bound"
I0802 09:32:08.782548       1 pv_controller.go:967] volume "local-pvks5pn" bound to claim "persistent-local-volumes-test-1592/pvc-lj95j"
I0802 09:32:08.790819       1 pv_controller.go:808] claim "persistent-local-volumes-test-1592/pvc-lj95j" entered phase "Bound"
I0802 09:32:08.979955       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-2a/vol-0c9068da7fc32fe35: error deleting EBS volume "vol-0c9068da7fc32fe35" since volume is currently attached to "i-002cc620c967da679"
I0802 09:32:08.980637       1 event.go:291] "Event occurred" object="pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0c9068da7fc32fe35\" since volume is currently attached to \"i-002cc620c967da679\""
E0802 09:32:08.980668       1 goroutinemap.go:150] Operation for "delete-pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546[b9318fe0-6d2d-4f1c-9b60-d99c474b1576]" failed. No retries permitted until 2021-08-02 09:32:09.980212323 +0000 UTC m=+411.924778245 (durationBeforeRetry 1s). Error: "error deleting EBS volume \"vol-0c9068da7fc32fe35\" since volume is currently attached to \"i-002cc620c967da679\""
I0802 09:32:08.992301       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-southeast-2a/vol-0bc3f5c1db447bf80
I0802 09:32:08.992328       1 pv_controller.go:1421] volume "pvc-2e9e77e8-587b-4194-b4a6-4395f77435cd" deleted
I0802 09:32:08.999671       1 pv_controller_base.go:504] deletion of claim "topology-5283/pvc-mj4xb" was already processed
I0802 09:32:09.523394       1 namespace_controller.go:185] Namespace has been deleted services-295
E0802 09:32:09.616059       1 tokens_controller.go:262] error synchronizing serviceaccount metrics-grabber-7871/default: secrets "default-token-z22vz" is forbidden: unable to create new content in namespace metrics-grabber-7871 because it is being terminated
I0802 09:32:09.863341       1 namespace_controller.go:185] Namespace has been deleted provisioning-11
I0802 09:32:10.158728       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "vol1" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92") on node "ip-172-20-56-163.ap-southeast-2.compute.internal"
I0802 09:32:10.161488       1 operation_generator.go:1409] Verified volume is safe to detach for volume "vol1" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92") on node "ip-172-20-56-163.ap-southeast-2.compute.internal"
E0802 09:32:10.350259       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-7274/default: secrets "default-token-hkcls" is forbidden: unable to create new content in namespace kubectl-7274 because it is being terminated
I0802 09:32:10.501130       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-651
I0802 09:32:11.093237       1 namespace_controller.go:185] Namespace has been deleted container-probe-7609
I0802 09:32:11.285536       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-2886/test-quota
I0802 09:32:11.392911       1 event.go:291] "Event occurred" object="csi-mock-volumes-8940/pvc-v6sww" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-8940\" or manually created by system administrator"
I0802 09:32:11.393170       1 event.go:291] "Event occurred" object="csi-mock-volumes-8940/pvc-v6sww" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-8940\" or manually created by system administrator"
I0802 09:32:11.419767       1 pv_controller.go:864] volume "pvc-3c7a18dd-e6a2-4ece-8b7f-a6dbc5ca282a" entered phase "Bound"
I0802 09:32:11.419793       1 pv_controller.go:967] volume "pvc-3c7a18dd-e6a2-4ece-8b7f-a6dbc5ca282a" bound to claim "csi-mock-volumes-8940/pvc-v6sww"
I0802 09:32:11.424298       1 pv_controller.go:808] claim "csi-mock-volumes-8940/pvc-v6sww" entered phase "Bound"
I0802 09:32:12.163938       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-3c7a18dd-e6a2-4ece-8b7f-a6dbc5ca282a" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-8940^4") from node "ip-172-20-35-97.ap-southeast-2.compute.internal"
I0802 09:32:12.190001       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-3c7a18dd-e6a2-4ece-8b7f-a6dbc5ca282a" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-8940^4") from node "ip-172-20-35-97.ap-southeast-2.compute.internal"
I0802 09:32:12.190337       1 event.go:291] "Event occurred" object="csi-mock-volumes-8940/pvc-volume-tester-w9xbg" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-3c7a18dd-e6a2-4ece-8b7f-a6dbc5ca282a\" "
I0802 09:32:12.360140       1 namespace_controller.go:185] Namespace has been deleted ephemeral-2872
I0802 09:32:12.442539       1 garbagecollector.go:471] "Processing object" object="ephemeral-2872-6578/csi-hostpath-attacher-mgh8q" objectUID=4bebb22f-e5e3-4869-921d-9af3bafe91a9 kind="EndpointSlice" virtual=false
I0802 09:32:12.629689       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-8616-827/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.103.208).\nI0802 09:32:12.660229       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpath-attacher-cdd54845d\" objectUID=74386df2-49e4-42dc-8ebd-f7a1716621b7 kind=\"ControllerRevision\" virtual=false\nI0802 09:32:12.660271       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-2872-6578/csi-hostpath-attacher\nI0802 09:32:12.660361       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpath-attacher-0\" objectUID=5da88bfa-dd7f-494f-a93c-e96a3da0a1d4 kind=\"Pod\" virtual=false\nI0802 09:32:12.830116       1 event.go:291] \"Event occurred\" object=\"volume-expand-8616-827/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0802 09:32:12.830639       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-8616-827/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.103.208).\nI0802 09:32:12.939415       1 namespace_controller.go:185] Namespace has been deleted downward-api-2064\nI0802 09:32:12.996522       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpath-attacher-mgh8q\" objectUID=4bebb22f-e5e3-4869-921d-9af3bafe91a9 kind=\"EndpointSlice\" propagationPolicy=Background\nI0802 09:32:12.996649       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpath-attacher-cdd54845d\" objectUID=74386df2-49e4-42dc-8ebd-f7a1716621b7 kind=\"ControllerRevision\" propagationPolicy=Background\nI0802 09:32:12.997062       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpath-attacher-0\" objectUID=5da88bfa-dd7f-494f-a93c-e96a3da0a1d4 kind=\"Pod\" propagationPolicy=Background\nI0802 09:32:13.045369       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpathplugin-kzq8j\" objectUID=528570c6-9ae1-461a-947b-029363a08441 kind=\"EndpointSlice\" virtual=false\nI0802 09:32:13.048262       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpathplugin-kzq8j\" objectUID=528570c6-9ae1-461a-947b-029363a08441 kind=\"EndpointSlice\" propagationPolicy=Background\nI0802 09:32:13.128813       1 aws.go:2291] Waiting for volume \"vol-0c9068da7fc32fe35\" state: actual=detaching, desired=detached\nI0802 09:32:13.208392       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-8616-827/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.147.83).\nI0802 09:32:13.242965       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpathplugin-dd45f967d\" objectUID=97cfa8ea-5d6d-4ffb-8aed-feb553f654e5 kind=\"ControllerRevision\" virtual=false\nI0802 09:32:13.243014       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-2872-6578/csi-hostpathplugin\nI0802 09:32:13.243045       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpathplugin-0\" objectUID=e4c90f52-4f0a-4bf0-bcbc-b8a0725726cd kind=\"Pod\" virtual=false\nI0802 09:32:13.245238       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpathplugin-dd45f967d\" objectUID=97cfa8ea-5d6d-4ffb-8aed-feb553f654e5 kind=\"ControllerRevision\" propagationPolicy=Background\nI0802 09:32:13.245441       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpathplugin-0\" objectUID=e4c90f52-4f0a-4bf0-bcbc-b8a0725726cd kind=\"Pod\" propagationPolicy=Background\nI0802 09:32:13.256501       1 route_controller.go:294] set node ip-172-20-48-162.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:32:13.256501       1 route_controller.go:294] set node ip-172-20-35-97.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:32:13.256527       1 route_controller.go:294] set node ip-172-20-43-68.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:32:13.256536       1 route_controller.go:294] set node ip-172-20-47-13.ap-southeast-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:32:13.256713       1 route_controller.go:294] set node ip-172-20-56-163.ap-southeast-2.compute.internal with 
NodeNetworkUnavailable=false was canceled because it is already set\nI0802 09:32:13.370465       1 namespace_controller.go:185] Namespace has been deleted provisioning-5488\nI0802 09:32:13.411961       1 event.go:291] \"Event occurred\" object=\"volume-expand-8616-827/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0802 09:32:13.412402       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-8616-827/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.147.83).\nI0802 09:32:13.436279       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpath-provisioner-qw7wh\" objectUID=55918399-06c7-4f71-ad9d-253a064cfe08 kind=\"EndpointSlice\" virtual=false\nI0802 09:32:13.438381       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpath-provisioner-qw7wh\" objectUID=55918399-06c7-4f71-ad9d-253a064cfe08 kind=\"EndpointSlice\" propagationPolicy=Background\nI0802 09:32:13.548097       1 pvc_protection_controller.go:291] PVC volumemode-2636/pvc-l54r9 is unused\nI0802 09:32:13.554072       1 pv_controller.go:638] volume \"local-hc7gp\" is released and reclaim policy \"Retain\" will be executed\nI0802 09:32:13.556999       1 pv_controller.go:864] volume \"local-hc7gp\" entered phase \"Released\"\nI0802 09:32:13.595993       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-8616-827/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.16.155).\nI0802 09:32:13.635350       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-8616-827/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.103.208).\nI0802 09:32:13.636273       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpath-provisioner-5b4f8c4cd8\" objectUID=bd2b6168-4f9c-453f-8f2a-3096d365822d kind=\"ControllerRevision\" virtual=false\nI0802 09:32:13.636436       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-2872-6578/csi-hostpath-provisioner\nI0802 09:32:13.636466       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpath-provisioner-0\" objectUID=950fdf65-35a1-401a-bf06-ecf240cd4f2e kind=\"Pod\" virtual=false\nI0802 09:32:13.639441       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpath-provisioner-5b4f8c4cd8\" objectUID=bd2b6168-4f9c-453f-8f2a-3096d365822d kind=\"ControllerRevision\" propagationPolicy=Background\nI0802 09:32:13.639604       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpath-provisioner-0\" objectUID=950fdf65-35a1-401a-bf06-ecf240cd4f2e kind=\"Pod\" propagationPolicy=Background\nI0802 09:32:13.741534       1 pv_controller_base.go:504] deletion of claim \"volumemode-2636/pvc-l54r9\" was already processed\nI0802 09:32:13.796142       1 event.go:291] \"Event occurred\" object=\"volume-expand-8616-827/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0802 09:32:13.796472       1 utils.go:413] couldn't 
find ipfamilies for headless service: volume-expand-8616-827/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.16.155).\nI0802 09:32:13.830632       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpath-resizer-htvbk\" objectUID=7420799f-f313-4cbf-b2dc-4899517cdd3e kind=\"EndpointSlice\" virtual=false\nI0802 09:32:13.834488       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpath-resizer-htvbk\" objectUID=7420799f-f313-4cbf-b2dc-4899517cdd3e kind=\"EndpointSlice\" propagationPolicy=Background\nI0802 09:32:13.983094       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-8616-827/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.68.228).\nI0802 09:32:14.031946       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpath-resizer-7df8c9d65\" objectUID=df91c72e-f94f-4cea-b192-db1af7ba8b1f kind=\"ControllerRevision\" virtual=false\nI0802 09:32:14.033058       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-2872-6578/csi-hostpath-resizer\nI0802 09:32:14.033108       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpath-resizer-0\" objectUID=ceb621e2-ae2c-428a-93ab-f8d42aa490c0 kind=\"Pod\" virtual=false\nI0802 09:32:14.035231       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpath-resizer-0\" objectUID=ceb621e2-ae2c-428a-93ab-f8d42aa490c0 kind=\"Pod\" propagationPolicy=Background\nI0802 09:32:14.035538       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpath-resizer-7df8c9d65\" objectUID=df91c72e-f94f-4cea-b192-db1af7ba8b1f kind=\"ControllerRevision\" propagationPolicy=Background\nI0802 09:32:14.204894       1 event.go:291] \"Event occurred\" object=\"volume-expand-5557/awsb9ck4\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0802 09:32:14.210942       1 pvc_protection_controller.go:291] PVC volume-expand-5557/awsb9ck4 is unused\nI0802 09:32:14.211440       1 event.go:291] \"Event occurred\" object=\"volume-expand-8616-827/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0802 09:32:14.211712       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-8616-827/csi-hostpath-resizer. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.68.228).\nI0802 09:32:14.211914       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-8616-827/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.147.83).\nI0802 09:32:14.243553       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpath-snapshotter-lch92\" objectUID=93fc8a58-cc5f-4604-b248-bcfb46d596bc kind=\"EndpointSlice\" virtual=false\nI0802 09:32:14.248911       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpath-snapshotter-lch92\" objectUID=93fc8a58-cc5f-4604-b248-bcfb46d596bc kind=\"EndpointSlice\" propagationPolicy=Background\nI0802 09:32:14.361577       1 event.go:291] \"Event occurred\" object=\"cronjob-6503/concurrent\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SawCompletedJob\" message=\"Saw completed job: concurrent-1627896420, status: Complete\"\nI0802 09:32:14.374614       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-8616-827/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.206.78).\nE0802 09:32:14.395176       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0802 09:32:14.449436       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-2872-6578/csi-hostpath-snapshotter\nI0802 09:32:14.449482       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpath-snapshotter-7d4ffd7cf5\" objectUID=8e66da0e-1b0e-427b-bc09-f0ee002a10bd kind=\"ControllerRevision\" virtual=false\nI0802 09:32:14.449492       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2872-6578/csi-hostpath-snapshotter-0\" objectUID=c13d2128-10fc-47e3-b9ce-a07bd15b3a60 kind=\"Pod\" virtual=false\nI0802 09:32:14.451466       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpath-snapshotter-7d4ffd7cf5\" objectUID=8e66da0e-1b0e-427b-bc09-f0ee002a10bd kind=\"ControllerRevision\" propagationPolicy=Background\nI0802 09:32:14.451523       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2872-6578/csi-hostpath-snapshotter-0\" objectUID=c13d2128-10fc-47e3-b9ce-a07bd15b3a60 kind=\"Pod\" propagationPolicy=Background\nI0802 09:32:14.549531       1 garbagecollector.go:471] \"Processing object\" object=\"services-9578/hairpin-test-6rg29\" objectUID=691205dd-61d5-4218-b284-d6baac84ea52 kind=\"EndpointSlice\" virtual=false\nI0802 09:32:14.551498       1 garbagecollector.go:580] \"Deleting object\" object=\"services-9578/hairpin-test-6rg29\" objectUID=691205dd-61d5-4218-b284-d6baac84ea52 kind=\"EndpointSlice\" propagationPolicy=Background\nI0802 09:32:14.579401       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-8616-827/csi-hostpath-snapshotter. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.206.78).\nI0802 09:32:14.579940       1 event.go:291] \"Event occurred\" object=\"volume-expand-8616-827/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0802 09:32:14.679991       1 namespace_controller.go:185] Namespace has been deleted metrics-grabber-7871\nI0802 09:32:14.989382       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-8616-827/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.68.228).\nI0802 09:32:15.142258       1 event.go:291] \"Event occurred\" object=\"volume-expand-8616/csi-hostpathz66v6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-8616\\\" or manually created by system administrator\"\nI0802 09:32:15.142285       1 event.go:291] \"Event occurred\" object=\"volume-expand-8616/csi-hostpathz66v6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-8616\\\" or manually created by system administrator\"\nI0802 09:32:15.198935       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-08-02 09:31:39 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdbj\",\n  InstanceId: 
\"i-002cc620c967da679\",\n  State: \"detaching\",\n  VolumeId: \"vol-0c9068da7fc32fe35\"\n}\nI0802 09:32:15.198991       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-5203d0e7-99dc-4dc1-9d55-3eda240ac546\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0c9068da7fc32fe35\") on node \"ip-172-20-35-97.ap-southeast-2.compute.internal\" \nI0802 09:32:15.402774       1 namespace_controller.go:185] Namespace has been deleted kubectl-7274\nI0802 09:32:15.646110       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"vol1\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-016a3609f5cd6ac92\") on node \"ip-172-20-56-163.ap-southeast-2.compute.internal\" \n==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-ip-172-20-43-68.ap-southeast-2.compute.internal ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-35-97.ap-southeast-2.compute.internal ====\nI0802 09:17:37.926789       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0802 09:17:37.927800       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0802 09:17:37.927816       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0802 09:17:37.927825       1 flags.go:59] FLAG: --bind-address-hard-fail=\"false\"\nI0802 09:17:37.927834       1 flags.go:59] FLAG: --cleanup=\"false\"\nI0802 09:17:37.927839       1 flags.go:59] FLAG: --cleanup-ipvs=\"true\"\nI0802 09:17:37.927844       1 flags.go:59] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI0802 09:17:37.927850       1 flags.go:59] FLAG: --config=\"\"\nI0802 09:17:37.927855       1 flags.go:59] FLAG: --config-sync-period=\"15m0s\"\nI0802 09:17:37.927862       1 flags.go:59] FLAG: --conntrack-max-per-core=\"131072\"\nI0802 09:17:37.927868       1 flags.go:59] FLAG: --conntrack-min=\"131072\"\nI0802 09:17:37.927873       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0802 09:17:37.927878       1 
flags.go:59] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0802 09:17:37.927882       1 flags.go:59] FLAG: --detect-local-mode=\"\"\nI0802 09:17:37.927889       1 flags.go:59] FLAG: --feature-gates=\"\"\nI0802 09:17:37.927895       1 flags.go:59] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0802 09:17:37.927901       1 flags.go:59] FLAG: --healthz-port=\"10256\"\nI0802 09:17:37.927906       1 flags.go:59] FLAG: --help=\"false\"\nI0802 09:17:37.927911       1 flags.go:59] FLAG: --hostname-override=\"ip-172-20-35-97.ap-southeast-2.compute.internal\"\nI0802 09:17:37.927917       1 flags.go:59] FLAG: --iptables-masquerade-bit=\"14\"\nI0802 09:17:37.927921       1 flags.go:59] FLAG: --iptables-min-sync-period=\"1s\"\nI0802 09:17:37.927926       1 flags.go:59] FLAG: --iptables-sync-period=\"30s\"\nI0802 09:17:37.927931       1 flags.go:59] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0802 09:17:37.927954       1 flags.go:59] FLAG: --ipvs-min-sync-period=\"0s\"\nI0802 09:17:37.927958       1 flags.go:59] FLAG: --ipvs-scheduler=\"\"\nI0802 09:17:37.927963       1 flags.go:59] FLAG: --ipvs-strict-arp=\"false\"\nI0802 09:17:37.927983       1 flags.go:59] FLAG: --ipvs-sync-period=\"30s\"\nI0802 09:17:37.927988       1 flags.go:59] FLAG: --ipvs-tcp-timeout=\"0s\"\nI0802 09:17:37.927992       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI0802 09:17:37.928004       1 flags.go:59] FLAG: --ipvs-udp-timeout=\"0s\"\nI0802 09:17:37.928008       1 flags.go:59] FLAG: --kube-api-burst=\"10\"\nI0802 09:17:37.928013       1 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0802 09:17:37.928018       1 flags.go:59] FLAG: --kube-api-qps=\"5\"\nI0802 09:17:37.928026       1 flags.go:59] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI0802 09:17:37.928031       1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI0802 09:17:37.928040       1 flags.go:59] FLAG: --log-dir=\"\"\nI0802 09:17:37.928045       1 flags.go:59] FLAG: 
--log-file=\"/var/log/kube-proxy.log\"\nI0802 09:17:37.928049       1 flags.go:59] FLAG: --log-file-max-size=\"1800\"\nI0802 09:17:37.928054       1 flags.go:59] FLAG: --log-flush-frequency=\"5s\"\nI0802 09:17:37.928058       1 flags.go:59] FLAG: --logtostderr=\"false\"\nI0802 09:17:37.928063       1 flags.go:59] FLAG: --masquerade-all=\"false\"\nI0802 09:17:37.928067       1 flags.go:59] FLAG: --master=\"https://api.internal.e2e-8608f95a98-9381a.test-cncf-aws.k8s.io\"\nI0802 09:17:37.928072       1 flags.go:59] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0802 09:17:37.928077       1 flags.go:59] FLAG: --metrics-port=\"10249\"\nI0802 09:17:37.928081       1 flags.go:59] FLAG: --nodeport-addresses=\"[]\"\nI0802 09:17:37.928086       1 flags.go:59] FLAG: --one-output=\"false\"\nI0802 09:17:37.928091       1 flags.go:59] FLAG: --oom-score-adj=\"-998\"\nI0802 09:17:37.928095       1 flags.go:59] FLAG: --profiling=\"false\"\nI0802 09:17:37.928099       1 flags.go:59] FLAG: --proxy-mode=\"\"\nI0802 09:17:37.928105       1 flags.go:59] FLAG: --proxy-port-range=\"\"\nI0802 09:17:37.928110       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI0802 09:17:37.928114       1 flags.go:59] FLAG: --skip-headers=\"false\"\nI0802 09:17:37.928118       1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI0802 09:17:37.928122       1 flags.go:59] FLAG: --stderrthreshold=\"2\"\nI0802 09:17:37.928127       1 flags.go:59] FLAG: --udp-timeout=\"250ms\"\nI0802 09:17:37.928132       1 flags.go:59] FLAG: --v=\"2\"\nI0802 09:17:37.928137       1 flags.go:59] FLAG: --version=\"false\"\nI0802 09:17:37.928145       1 flags.go:59] FLAG: --vmodule=\"\"\nI0802 09:17:37.928149       1 flags.go:59] FLAG: --write-config-to=\"\"\nW0802 09:17:37.928157       1 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. 
Please begin using a config file ASAP.\nI0802 09:17:37.928251       1 feature_gate.go:243] feature gates: &{map[]}\nI0802 09:17:37.928356       1 feature_gate.go:243] feature gates: &{map[]}\nI0802 09:17:38.078295       1 node.go:172] Successfully retrieved node IP: 172.20.35.97\nI0802 09:17:38.078384       1 server_others.go:142] kube-proxy node IP is an IPv4 address (172.20.35.97), assume IPv4 operation\nW0802 09:17:38.174406       1 server_others.go:584] Unknown proxy mode \"\", assuming iptables proxy\nI0802 09:17:38.174514       1 server_others.go:182] DetectLocalMode: 'ClusterCIDR'\nI0802 09:17:38.174531       1 server_others.go:185] Using iptables Proxier.\nI0802 09:17:38.174583       1 utils.go:321] Changed sysctl \"net/ipv4/conf/all/route_localnet\": 0 -> 1\nI0802 09:17:38.174631       1 proxier.go:287] iptables(IPv4) masquerade mark: 0x00004000\nI0802 09:17:38.174658       1 proxier.go:334] iptables(IPv4) sync params: minSyncPeriod=1s, syncPeriod=30s, burstSyncs=2\nI0802 09:17:38.174688       1 proxier.go:346] iptables(IPv4) supports --random-fully\nI0802 09:17:38.174824       1 server.go:650] Version: v1.20.9\nI0802 09:17:38.175192       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144\nI0802 09:17:38.175215       1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI0802 09:17:38.175324       1 mount_linux.go:188] Detected OS without systemd\nI0802 09:17:38.175511       1 conntrack.go:83] Setting conntrack hashsize to 65536\nI0802 09:17:38.180140       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI0802 09:17:38.180202       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI0802 09:17:38.183339       1 config.go:315] Starting service config controller\nI0802 09:17:38.183354       1 shared_informer.go:240] Waiting for caches to sync for service config\nI0802 09:17:38.183390       1 config.go:224] Starting endpoint slice config 
controller
I0802 09:17:38.183394       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0802 09:17:38.183591       1 reflector.go:219] Starting reflector *v1.Service (15m0s) from k8s.io/client-go/informers/factory.go:134
I0802 09:17:38.183856       1 reflector.go:219] Starting reflector *v1beta1.EndpointSlice (15m0s) from k8s.io/client-go/informers/factory.go:134
I0802 09:17:38.191496       1 service.go:275] Service default/kubernetes updated: 1 ports
I0802 09:17:38.191594       1 service.go:275] Service kube-system/kube-dns updated: 3 ports
I0802 09:17:38.283523       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0802 09:17:38.300363       1 proxier.go:818] Not syncing iptables until Services and Endpoints have been received from master
I0802 09:17:38.300188       1 shared_informer.go:247] Caches are synced for service config 
I0802 09:17:38.300711       1 service.go:390] Adding new service port "default/kubernetes:https" at 100.64.0.1:443/TCP
I0802 09:17:38.300733       1 service.go:390] Adding new service port "kube-system/kube-dns:dns" at 100.64.0.10:53/UDP
I0802 09:17:38.300744       1 service.go:390] Adding new service port "kube-system/kube-dns:dns-tcp" at 100.64.0.10:53/TCP
I0802 09:17:38.300754       1 service.go:390] Adding new service port "kube-system/kube-dns:metrics" at 100.64.0.10:9153/TCP
I0802 09:17:38.300898       1 proxier.go:858] Stale udp service kube-system/kube-dns:dns -> 100.64.0.10
I0802 09:17:38.300923       1 proxier.go:871] Syncing iptables rules
I0802 09:17:38.400462       1 proxier.go:826] syncProxyRules took 100.056168ms
I0802 09:19:12.792727       1 service.go:275] Service crd-webhook-9522/e2e-test-crd-conversion-webhook updated: 1 ports
I0802 09:19:12.793176       1 service.go:390] Adding new service port "crd-webhook-9522/e2e-test-crd-conversion-webhook" at 100.68.250.245:9443/TCP
I0802 09:19:12.793226       1 proxier.go:871] Syncing iptables rules
I0802 09:19:12.835397       1 proxier.go:826] syncProxyRules took 42.61176ms
I0802 09:19:12.835767       1 proxier.go:871] Syncing iptables rules
I0802 09:19:12.870357       1 proxier.go:826] syncProxyRules took 34.925171ms
I0802 09:19:17.350239       1 service.go:275] Service crd-webhook-9522/e2e-test-crd-conversion-webhook updated: 0 ports
I0802 09:19:17.350636       1 service.go:415] Removing service port "crd-webhook-9522/e2e-test-crd-conversion-webhook"
I0802 09:19:17.350678       1 proxier.go:871] Syncing iptables rules
I0802 09:19:17.385583       1 proxier.go:826] syncProxyRules took 35.310055ms
I0802 09:19:17.386104       1 proxier.go:871] Syncing iptables rules
I0802 09:19:17.426871       1 proxier.go:826] syncProxyRules took 41.181778ms
I0802 09:19:28.763951       1 service.go:275] Service webhook-5205/e2e-test-webhook updated: 1 ports
I0802 09:19:28.764368       1 service.go:390] Adding new service port "webhook-5205/e2e-test-webhook" at 100.65.222.225:8443/TCP
I0802 09:19:28.764415       1 proxier.go:871] Syncing iptables rules
I0802 09:19:28.790128       1 proxier.go:826] syncProxyRules took 26.122803ms
I0802 09:19:28.790615       1 proxier.go:871] Syncing iptables rules
I0802 09:19:28.815850       1 proxier.go:826] syncProxyRules took 25.675175ms
I0802 09:19:32.378450       1 service.go:275] Service webhook-5205/e2e-test-webhook updated: 0 ports
I0802 09:19:32.379090       1 service.go:415] Removing service port "webhook-5205/e2e-test-webhook"
I0802 09:19:32.379143       1 proxier.go:871] Syncing iptables rules
I0802 09:19:32.417555       1 proxier.go:826] syncProxyRules took 38.873084ms
I0802 09:19:32.417905       1 proxier.go:871] Syncing iptables rules
I0802 09:19:32.474954       1 proxier.go:826] syncProxyRules took 57.3576ms
I0802 09:19:32.516134       1 service.go:275] Service webhook-4678/e2e-test-webhook updated: 1 ports
I0802 09:19:33.475520       1 service.go:390] Adding new service port "webhook-4678/e2e-test-webhook" at 100.68.62.124:8443/TCP
I0802 09:19:33.475719       1 proxier.go:871] Syncing iptables rules
I0802 09:19:33.500213       1 proxier.go:826] syncProxyRules took 25.091545ms
I0802 09:19:36.800952       1 service.go:275] Service webhook-4678/e2e-test-webhook updated: 0 ports
I0802 09:19:36.801415       1 service.go:415] Removing service port "webhook-4678/e2e-test-webhook"
I0802 09:19:36.801464       1 proxier.go:871] Syncing iptables rules
I0802 09:19:36.826781       1 proxier.go:826] syncProxyRules took 25.766927ms
I0802 09:19:37.266258       1 proxier.go:871] Syncing iptables rules
I0802 09:19:37.290867       1 proxier.go:826] syncProxyRules took 24.989262ms
I0802 09:19:52.226994       1 service.go:275] Service volume-2336-5573/csi-hostpath-attacher updated: 1 ports
I0802 09:19:52.227367       1 service.go:390] Adding new service port "volume-2336-5573/csi-hostpath-attacher:dummy" at 100.65.192.59:12345/TCP
I0802 09:19:52.227412       1 proxier.go:871] Syncing iptables rules
I0802 09:19:52.254100       1 proxier.go:826] syncProxyRules took 27.06623ms
I0802 09:19:52.254529       1 proxier.go:871] Syncing iptables rules
I0802 09:19:52.286580       1 proxier.go:826] syncProxyRules took 32.445517ms
I0802 09:19:52.801491       1 service.go:275] Service volume-2336-5573/csi-hostpathplugin updated: 1 ports
I0802 09:19:53.197161       1 service.go:275] Service volume-2336-5573/csi-hostpath-provisioner updated: 1 ports
I0802 09:19:53.309049       1 service.go:390] Adding new service port "volume-2336-5573/csi-hostpathplugin:dummy" at 100.65.252.165:12345/TCP
I0802 09:19:53.309081       1 service.go:390] Adding new service port "volume-2336-5573/csi-hostpath-provisioner:dummy" at 100.71.131.80:12345/TCP
I0802 09:19:53.309129       1 proxier.go:871] Syncing iptables rules
I0802 09:19:53.349743       1 proxier.go:826] syncProxyRules took 62.112515ms
I0802 09:19:53.580457       1 service.go:275] Service volume-2336-5573/csi-hostpath-resizer updated: 1 ports
I0802 09:19:53.974067       1 service.go:275] Service volume-2336-5573/csi-hostpath-snapshotter updated: 1 ports
I0802 09:19:54.350255       1 service.go:390] Adding new service port "volume-2336-5573/csi-hostpath-snapshotter:dummy" at 100.69.168.99:12345/TCP
I0802 09:19:54.350286       1 service.go:390] Adding new service port "volume-2336-5573/csi-hostpath-resizer:dummy" at 100.70.190.18:12345/TCP
I0802 09:19:54.350352       1 proxier.go:871] Syncing iptables rules
I0802 09:19:54.379900       1 proxier.go:826] syncProxyRules took 30.035727ms
I0802 09:20:01.635749       1 proxier.go:871] Syncing iptables rules
I0802 09:20:01.664064       1 proxier.go:826] syncProxyRules took 28.757468ms
I0802 09:20:05.026544       1 proxier.go:871] Syncing iptables rules
I0802 09:20:05.054621       1 proxier.go:826] syncProxyRules took 28.481148ms
I0802 09:20:06.644742       1 proxier.go:871] Syncing iptables rules
I0802 09:20:06.674407       1 proxier.go:826] syncProxyRules took 30.070744ms
I0802 09:20:11.344706       1 service.go:275] Service conntrack-8962/svc-udp updated: 1 ports
I0802 09:20:11.345101       1 service.go:390] Adding new service port "conntrack-8962/svc-udp:udp" at 100.69.106.222:80/UDP
I0802 09:20:11.345152       1 proxier.go:871] Syncing iptables rules
I0802 09:20:11.371424       1 proxier.go:826] syncProxyRules took 26.67517ms
I0802 09:20:11.371747       1 proxier.go:871] Syncing iptables rules
I0802 09:20:11.398045       1 proxier.go:826] syncProxyRules took 26.587641ms
I0802 09:20:12.758375       1 proxier.go:871] Syncing iptables rules
I0802 09:20:12.784693       1 proxier.go:826] syncProxyRules took 26.653117ms
I0802 09:20:13.530944       1 service.go:275] Service webhook-5840/e2e-test-webhook updated: 1 ports
I0802 09:20:13.531372       1 service.go:390] Adding new service port "webhook-5840/e2e-test-webhook" at 100.71.172.43:8443/TCP
I0802 09:20:13.531429       1 proxier.go:871] Syncing iptables rules
I0802 09:20:13.562047       1 proxier.go:826] syncProxyRules took 31.051044ms
I0802 09:20:14.562495       1 proxier.go:871] Syncing iptables rules
I0802 09:20:14.588925       1 proxier.go:826] syncProxyRules took 26.771815ms
I0802 09:20:16.007614       1 proxier.go:858] Stale udp service conntrack-8962/svc-udp:udp -> 100.69.106.222
I0802 09:20:16.007653       1 proxier.go:871] Syncing iptables rules
I0802 09:20:16.037026       1 proxier.go:826] syncProxyRules took 29.724757ms
I0802 09:20:17.195491       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-attacher updated: 1 ports
I0802 09:20:17.195848       1 service.go:390] Adding new service port "ephemeral-2112-9083/csi-hostpath-attacher:dummy" at 100.69.5.207:12345/TCP
I0802 09:20:17.195903       1 proxier.go:871] Syncing iptables rules
I0802 09:20:17.225534       1 proxier.go:826] syncProxyRules took 30.00428ms
I0802 09:20:17.393741       1 service.go:275] Service services-7758/nodeport-reuse updated: 1 ports
I0802 09:20:17.394122       1 service.go:390] Adding new service port "services-7758/nodeport-reuse" at 100.64.24.239:80/TCP
I0802 09:20:17.394187       1 proxier.go:871] Syncing iptables rules
I0802 09:20:17.424510       1 service.go:275] Service webhook-5840/e2e-test-webhook updated: 0 ports
I0802 09:20:17.425805       1 proxier.go:1715] Opened local port "nodePort for services-7758/nodeport-reuse" (:31870/tcp)
I0802 09:20:17.429678       1 proxier.go:826] syncProxyRules took 35.897905ms
I0802 09:20:17.586764       1 service.go:275] Service services-7758/nodeport-reuse updated: 0 ports
I0802 09:20:17.772533       1 service.go:275] Service ephemeral-2112-9083/csi-hostpathplugin updated: 1 ports
I0802 09:20:18.159366       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-provisioner updated: 1 ports
I0802 09:20:18.430172       1 service.go:415] Removing service port "webhook-5840/e2e-test-webhook"
I0802 09:20:18.430208       1 service.go:415] Removing service port "services-7758/nodeport-reuse"
I0802 09:20:18.430227       1 service.go:390] Adding new service port "ephemeral-2112-9083/csi-hostpathplugin:dummy" at 100.68.89.107:12345/TCP
I0802 09:20:18.430241       1 service.go:390] Adding new service port "ephemeral-2112-9083/csi-hostpath-provisioner:dummy" at 100.71.186.96:12345/TCP
I0802 09:20:18.430314       1 proxier.go:871] Syncing iptables rules
I0802 09:20:18.474129       1 proxier.go:826] syncProxyRules took 44.321944ms
I0802 09:20:18.545660       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-resizer updated: 1 ports
I0802 09:20:18.932102       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-snapshotter updated: 1 ports
I0802 09:20:19.474738       1 service.go:390] Adding new service port "ephemeral-2112-9083/csi-hostpath-resizer:dummy" at 100.68.254.250:12345/TCP
I0802 09:20:19.474783       1 service.go:390] Adding new service port "ephemeral-2112-9083/csi-hostpath-snapshotter:dummy" at 100.71.116.61:12345/TCP
I0802 09:20:19.474852       1 proxier.go:871] Syncing iptables rules
I0802 09:20:19.511801       1 proxier.go:826] syncProxyRules took 37.531602ms
I0802 09:20:20.512519       1 proxier.go:871] Syncing iptables rules
I0802 09:20:20.559094       1 proxier.go:826] syncProxyRules took 47.081352ms
I0802 09:20:22.313364       1 proxier.go:871] Syncing iptables rules
I0802 09:20:22.339706       1 proxier.go:826] syncProxyRules took 26.775785ms
I0802 09:20:22.453553       1 service.go:275] Service services-7758/nodeport-reuse updated: 1 ports
I0802 09:20:22.454102       1 service.go:390] Adding new service port "services-7758/nodeport-reuse" at 100.68.238.177:80/TCP
I0802 09:20:22.454186       1 proxier.go:871] Syncing iptables rules
I0802 09:20:22.476861       1 proxier.go:1715] Opened local port "nodePort for services-7758/nodeport-reuse" (:31870/tcp)
I0802 09:20:22.480853       1 proxier.go:826] syncProxyRules took 27.263338ms
I0802 09:20:22.645669       1 service.go:275] Service services-7758/nodeport-reuse updated: 0 ports
I0802 09:20:23.481542       1 service.go:415] Removing service port "services-7758/nodeport-reuse"
I0802 09:20:23.481632       1 proxier.go:871] Syncing iptables rules
I0802 09:20:23.540204       1 proxier.go:826] syncProxyRules took 59.094581ms
I0802 09:20:24.778815       1 proxier.go:871] Syncing iptables rules
I0802 09:20:24.818076       1 proxier.go:826] syncProxyRules took 39.708998ms
I0802 09:20:26.223749       1 proxier.go:871] Syncing iptables rules
I0802 09:20:26.273174       1 proxier.go:826] syncProxyRules took 50.433994ms
I0802 09:20:29.728485       1 proxier.go:871] Syncing iptables rules
I0802 09:20:29.782312       1 proxier.go:826] syncProxyRules took 54.383573ms
I0802 09:20:33.889002       1 proxier.go:871] Syncing iptables rules
I0802 09:20:33.915617       1 proxier.go:826] syncProxyRules took 27.069519ms
I0802 09:20:37.022263       1 proxier.go:871] Syncing iptables rules
I0802 09:20:37.062870       1 proxier.go:826] syncProxyRules took 41.165331ms
I0802 09:20:45.180262       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-attacher updated: 1 ports
I0802 09:20:45.180808       1 service.go:390] Adding new service port "volume-expand-6275-6517/csi-hostpath-attacher:dummy" at 100.70.25.117:12345/TCP
I0802 09:20:45.180879       1 proxier.go:871] Syncing iptables rules
I0802 09:20:45.220059       1 proxier.go:826] syncProxyRules took 39.760875ms
I0802 09:20:45.220706       1 proxier.go:871] Syncing iptables rules
I0802 09:20:45.251795       1 proxier.go:826] syncProxyRules took 31.633168ms
I0802 09:20:45.761506       1 service.go:275] Service volume-expand-6275-6517/csi-hostpathplugin updated: 1 ports
I0802 09:20:46.151706       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-provisioner updated: 1 ports
I0802 09:20:46.252398       1 service.go:390] Adding new service port "volume-expand-6275-6517/csi-hostpathplugin:dummy" at 100.66.211.72:12345/TCP
I0802 09:20:46.252430       1 service.go:390] Adding new service port "volume-expand-6275-6517/csi-hostpath-provisioner:dummy" at 100.68.28.63:12345/TCP
I0802 09:20:46.252496       1 proxier.go:871] Syncing iptables rules
I0802 09:20:46.289498       1 proxier.go:826] syncProxyRules took 37.500367ms
I0802 09:20:46.533872       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-resizer updated: 1 ports
I0802 09:20:46.917328       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-snapshotter updated: 1 ports
I0802 09:20:47.290402       1 service.go:390] Adding new service port "volume-expand-6275-6517/csi-hostpath-resizer:dummy" at 100.69.212.248:12345/TCP
I0802 09:20:47.290435       1 service.go:390] Adding new service port "volume-expand-6275-6517/csi-hostpath-snapshotter:dummy" at 100.64.49.220:12345/TCP
I0802 09:20:47.290523       1 proxier.go:871] Syncing iptables rules
I0802 09:20:47.321091       1 proxier.go:826] syncProxyRules took 31.2167ms
I0802 09:20:49.996935       1 proxier.go:871] Syncing iptables rules
I0802 09:20:50.021440       1 service.go:275] Service conntrack-8962/svc-udp updated: 0 ports
I0802 09:20:50.028516       1 proxier.go:826] syncProxyRules took 32.134348ms
I0802 09:20:50.029098       1 service.go:415] Removing service port "conntrack-8962/svc-udp:udp"
I0802 09:20:50.029190       1 proxier.go:871] Syncing iptables rules
I0802 09:20:50.059503       1 proxier.go:826] syncProxyRules took 30.95467ms
I0802 09:20:51.000797       1 proxier.go:871] Syncing iptables rules
I0802 09:20:51.028438       1 proxier.go:826] syncProxyRules took 28.162903ms
I0802 09:20:52.029146       1 proxier.go:871] Syncing iptables rules
I0802 09:20:52.058763       1 proxier.go:826] syncProxyRules took 30.190224ms
I0802 09:20:52.619604       1 service.go:275] Service kubectl-6830/rm2 updated: 1 ports
I0802 09:20:53.059494       1 service.go:390] Adding new service port "kubectl-6830/rm2" at 100.67.16.37:1234/TCP
I0802 09:20:53.059609       1 proxier.go:871] Syncing iptables rules
I0802 09:20:53.097049       1 proxier.go:826] syncProxyRules took 38.15599ms
I0802 09:20:56.067053       1 service.go:275] Service kubectl-6830/rm3 updated: 1 ports
I0802 09:20:56.067627       1 service.go:390] Adding new service port "kubectl-6830/rm3" at 100.67.68.194:2345/TCP
I0802 09:20:56.067703       1 proxier.go:871] Syncing iptables rules
I0802 09:20:56.097610       1 proxier.go:826] syncProxyRules took 30.51175ms
I0802 09:20:56.098183       1 proxier.go:871] Syncing iptables rules
I0802 09:20:56.127439       1 proxier.go:826] syncProxyRules took 29.793653ms
I0802 09:21:04.100334       1 proxier.go:871] Syncing iptables rules
I0802 09:21:04.118191       1 service.go:275] Service kubectl-6830/rm2 updated: 0 ports
I0802 09:21:04.131020       1 service.go:275] Service kubectl-6830/rm3 updated: 0 ports
I0802 09:21:04.133512       1 proxier.go:826] syncProxyRules took 34.415987ms
I0802 09:21:04.134120       1 service.go:415] Removing service port "kubectl-6830/rm2"
I0802 09:21:04.134145       1 service.go:415] Removing service port "kubectl-6830/rm3"
I0802 09:21:04.134237       1 proxier.go:871] Syncing iptables rules
I0802 09:21:04.187451       1 proxier.go:826] syncProxyRules took 53.904447ms
I0802 09:21:05.188239       1 proxier.go:871] Syncing iptables rules
I0802 09:21:05.240219       1 proxier.go:826] syncProxyRules took 52.630161ms
I0802 09:21:11.744360       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-attacher updated: 1 ports
I0802 09:21:11.744916       1 service.go:390] Adding new service port "provisioning-2246-2145/csi-hostpath-attacher:dummy" at 100.66.46.66:12345/TCP
I0802 09:21:11.745058       1 proxier.go:871] Syncing iptables rules
I0802 09:21:11.775379       1 proxier.go:826] syncProxyRules took 30.973838ms
I0802 09:21:11.775912       1 proxier.go:871] Syncing iptables rules
I0802 09:21:11.804096       1 proxier.go:826] syncProxyRules took 28.677499ms
I0802 09:21:12.328482       1 service.go:275] Service provisioning-2246-2145/csi-hostpathplugin updated: 1 ports
I0802 09:21:12.750229       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-provisioner updated: 1 ports
I0802 09:21:12.750842       1 service.go:390] Adding new service port "provisioning-2246-2145/csi-hostpathplugin:dummy" at 100.70.43.46:12345/TCP
I0802 09:21:12.750892       1 service.go:390] Adding new service port "provisioning-2246-2145/csi-hostpath-provisioner:dummy" at 100.65.113.83:12345/TCP
I0802 09:21:12.750986       1 proxier.go:871] Syncing iptables rules
I0802 09:21:12.804812       1 proxier.go:826] syncProxyRules took 54.509752ms
I0802 09:21:13.136992       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-resizer updated: 1 ports
I0802 09:21:13.528119       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-snapshotter updated: 1 ports
I0802 09:21:13.805725       1 service.go:390] Adding new service port "provisioning-2246-2145/csi-hostpath-resizer:dummy" at 100.71.47.96:12345/TCP
I0802 09:21:13.805758       1 service.go:390] Adding new service port "provisioning-2246-2145/csi-hostpath-snapshotter:dummy" at 100.66.113.34:12345/TCP
I0802 09:21:13.805841       1 proxier.go:871] Syncing iptables rules
I0802 09:21:13.895801       1 proxier.go:826] syncProxyRules took 90.660347ms
I0802 09:21:15.779158       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-attacher updated: 0 ports
I0802 09:21:15.779738       1 service.go:415] Removing service port "ephemeral-2112-9083/csi-hostpath-attacher:dummy"
I0802 09:21:15.779804       1 proxier.go:871] Syncing iptables rules
I0802 09:21:15.823369       1 proxier.go:826] syncProxyRules took 44.169202ms
I0802 09:21:15.825064       1 proxier.go:871] Syncing iptables rules
I0802 09:21:15.855640       1 proxier.go:826] syncProxyRules took 32.231384ms
I0802 09:21:16.377517       1 service.go:275] Service ephemeral-2112-9083/csi-hostpathplugin updated: 0 ports
I0802 09:21:16.775042       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-provisioner updated: 0 ports
I0802 09:21:16.790677       1 service.go:415] Removing service port "ephemeral-2112-9083/csi-hostpathplugin:dummy"
I0802 09:21:16.790718       1 service.go:415] Removing service port "ephemeral-2112-9083/csi-hostpath-provisioner:dummy"
I0802 09:21:16.790807       1 proxier.go:871] Syncing iptables rules
I0802 09:21:16.900600       1 proxier.go:826] syncProxyRules took 110.628918ms
I0802 09:21:17.185918       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-resizer updated: 0 ports
I0802 09:21:17.589187       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-snapshotter updated: 0 ports
I0802 09:21:17.901245       1 service.go:415] Removing service port "ephemeral-2112-9083/csi-hostpath-resizer:dummy"
I0802 09:21:17.901279       1 service.go:415] Removing service port "ephemeral-2112-9083/csi-hostpath-snapshotter:dummy"
I0802 09:21:17.901353       1 proxier.go:871] Syncing iptables rules
I0802 09:21:17.929130       1 proxier.go:826] syncProxyRules took 28.361887ms
I0802 09:21:18.906069       1 service.go:275] Service volume-6703-8059/csi-hostpath-attacher updated: 1 ports
I0802 09:21:18.906581       1 service.go:390] Adding new service port "volume-6703-8059/csi-hostpath-attacher:dummy" at 100.67.115.199:12345/TCP
I0802 09:21:18.906644       1 proxier.go:871] Syncing iptables rules
I0802 09:21:18.937638       1 proxier.go:826] syncProxyRules took 31.531525ms
I0802 09:21:19.491637       1 service.go:275] Service volume-6703-8059/csi-hostpathplugin updated: 1 ports
I0802 09:21:19.876273       1 service.go:275] Service volume-6703-8059/csi-hostpath-provisioner updated: 1 ports
I0802 09:21:19.876804       1 service.go:390] Adding new service port "volume-6703-8059/csi-hostpathplugin:dummy" at 100.67.227.1:12345/TCP
I0802 09:21:19.876825       1 service.go:390] Adding new service port "volume-6703-8059/csi-hostpath-provisioner:dummy" at 100.70.131.227:12345/TCP
I0802 09:21:19.876886       1 proxier.go:871] Syncing iptables rules
I0802 09:21:19.916582       1 proxier.go:826] syncProxyRules took 40.271139ms
I0802 09:21:20.276190       1 service.go:275] Service volume-6703-8059/csi-hostpath-resizer updated: 1 ports
I0802 09:21:20.662248       1 service.go:275] Service volume-6703-8059/csi-hostpath-snapshotter updated: 1 ports
I0802 09:21:20.917222       1 service.go:390] Adding new service port "volume-6703-8059/csi-hostpath-resizer:dummy" at 100.67.189.41:12345/TCP
I0802 09:21:20.917252       1 service.go:390] Adding new service port "volume-6703-8059/csi-hostpath-snapshotter:dummy" at 100.69.0.142:12345/TCP
I0802 09:21:20.917332       1 proxier.go:871] Syncing iptables rules
I0802 09:21:20.960445       1 proxier.go:826] syncProxyRules took 43.729214ms
I0802 09:21:24.904621       1 proxier.go:871] Syncing iptables rules
I0802 09:21:24.954254       1 proxier.go:826] syncProxyRules took 50.143302ms
I0802 09:21:25.795831       1 proxier.go:871] Syncing iptables rules
I0802 09:21:25.848769       1 proxier.go:826] syncProxyRules took 53.529329ms
I0802 09:21:27.196888       1 proxier.go:871] Syncing iptables rules
I0802 09:21:27.252612       1 proxier.go:826] syncProxyRules took 56.237172ms
I0802 09:21:27.992680       1 proxier.go:871] Syncing iptables rules
I0802 09:21:28.022164       1 proxier.go:826] syncProxyRules took 29.940177ms
I0802 09:21:29.022770       1 proxier.go:871] Syncing iptables rules
I0802 09:21:29.051621       1 proxier.go:826] syncProxyRules took 29.331655ms
I0802 09:21:29.646607       1 service.go:275] Service volume-2336-5573/csi-hostpath-attacher updated: 0 ports
I0802 09:21:29.647131       1 service.go:415] Removing service port "volume-2336-5573/csi-hostpath-attacher:dummy"
I0802 09:21:29.647204       1 proxier.go:871] Syncing iptables rules
I0802 09:21:29.686125       1 proxier.go:826] syncProxyRules took 39.475126ms
I0802 09:21:30.257096       1 service.go:275] Service volume-2336-5573/csi-hostpathplugin updated: 0 ports
I0802 09:21:30.257672       1 service.go:415] Removing service port "volume-2336-5573/csi-hostpathplugin:dummy"
I0802 09:21:30.257759       1 proxier.go:871] Syncing iptables rules
I0802 09:21:30.311050       1 proxier.go:826] syncProxyRules took 53.908885ms
I0802 09:21:30.658963       1 service.go:275] Service volume-2336-5573/csi-hostpath-provisioner updated: 0 ports
I0802 09:21:31.049238       1 service.go:275] Service volume-2336-5573/csi-hostpath-resizer updated: 0 ports
I0802 09:21:31.311673       1 service.go:415] Removing service port "volume-2336-5573/csi-hostpath-provisioner:dummy"
I0802 09:21:31.311712       1 service.go:415] Removing service port "volume-2336-5573/csi-hostpath-resizer:dummy"
I0802 09:21:31.311795       1 proxier.go:871] Syncing iptables rules
I0802 09:21:31.340131       1 proxier.go:826] syncProxyRules took 28.908516ms
I0802 09:21:31.439015       1 service.go:275] Service volume-2336-5573/csi-hostpath-snapshotter updated: 0 ports
I0802 09:21:32.340685       1 service.go:415] Removing service port "volume-2336-5573/csi-hostpath-snapshotter:dummy"
I0802 09:21:32.340808       1 proxier.go:871] Syncing iptables rules
I0802 09:21:32.370402       1 proxier.go:826] syncProxyRules took 30.124791ms
I0802 09:21:32.572689       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-attacher updated: 0 ports
I0802 09:21:33.150344       1 service.go:275] Service volume-expand-6275-6517/csi-hostpathplugin updated: 0 ports
I0802 09:21:33.370918       1 service.go:415] Removing service port "volume-expand-6275-6517/csi-hostpath-attacher:dummy"
I0802 09:21:33.370950       1 service.go:415] Removing service port "volume-expand-6275-6517/csi-hostpathplugin:dummy"
I0802 09:21:33.371043       1 proxier.go:871] Syncing iptables rules
I0802 09:21:33.403801       1 proxier.go:826] syncProxyRules took 33.267262ms
I0802 09:21:33.541226       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-provisioner updated: 0 ports
I0802 09:21:33.934244       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-resizer updated: 0 ports
I0802 09:21:34.323376       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-snapshotter updated: 0 ports
I0802 09:21:34.323837       1 service.go:415] Removing service port "volume-expand-6275-6517/csi-hostpath-provisioner:dummy"
I0802 09:21:34.323868       1 service.go:415] Removing service port "volume-expand-6275-6517/csi-hostpath-resizer:dummy"
I0802 09:21:34.323879       1 service.go:415] Removing service port "volume-expand-6275-6517/csi-hostpath-snapshotter:dummy"
I0802 09:21:34.323956       1 proxier.go:871] Syncing iptables rules
I0802 09:21:34.351186       1 proxier.go:826] syncProxyRules took 27.773442ms
I0802 09:21:35.351791       1 proxier.go:871] Syncing iptables rules
I0802 09:21:35.383857       1 proxier.go:826] syncProxyRules took 32.515807ms
I0802 09:21:46.767032       1 proxier.go:871] Syncing iptables rules
I0802 09:21:46.813714       1 proxier.go:826] syncProxyRules took 47.203803ms
I0802 09:22:01.417172       1 service.go:275] Service services-6870/nodeport-update-service updated: 1 ports
I0802 09:22:01.417660       1 service.go:390] Adding new service port "services-6870/nodeport-update-service" at 100.70.108.206:80/TCP
I0802 09:22:01.417720       1 proxier.go:871] Syncing iptables rules
I0802 09:22:01.458436       1 proxier.go:826] syncProxyRules took 41.226934ms
I0802 09:22:01.459006       1 proxier.go:871] Syncing iptables rules
I0802 09:22:01.496613       1 proxier.go:826] syncProxyRules took 38.140779ms
I0802 09:22:01.797852       1 service.go:275] Service services-6870/nodeport-update-service updated: 1 ports
I0802 09:22:02.497304       1 service.go:390] Adding new service port "services-6870/nodeport-update-service:tcp-port" at 100.70.108.206:80/TCP
I0802 09:22:02.497331       1 service.go:415] Removing service port "services-6870/nodeport-update-service"
I0802 09:22:02.497392       1 proxier.go:871] Syncing iptables rules
I0802 09:22:02.521947       1 proxier.go:1715] Opened local port "nodePort for services-6870/nodeport-update-service:tcp-port" (:32759/tcp)
I0802 09:22:02.526117       1 proxier.go:826] syncProxyRules took 29.375774ms
I0802 09:22:04.335766       1 proxier.go:871] Syncing iptables rules
I0802 09:22:04.372069       1 proxier.go:826] syncProxyRules took 36.825286ms
I0802 09:22:06.012651       1 service.go:275] Service webhook-6750/e2e-test-webhook updated: 1 ports
I0802 09:22:06.013170       1 service.go:390] Adding new service port "webhook-6750/e2e-test-webhook" at 100.71.154.99:8443/TCP
I0802 09:22:06.013237       1 proxier.go:871] Syncing iptables rules
I0802 09:22:06.043324       1 proxier.go:826] syncProxyRules took 30.632208ms
I0802 09:22:06.043884       1 proxier.go:871] Syncing iptables rules
I0802 09:22:06.072863       1 proxier.go:826] syncProxyRules took 29.499988ms
I0802 09:22:07.559865       1 proxier.go:871] Syncing iptables rules
I0802 09:22:07.595304       1 proxier.go:826] syncProxyRules took 35.96247ms
I0802 09:22:09.985569       1 service.go:275] Service webhook-6750/e2e-test-webhook updated: 0 ports
I0802 09:22:09.986143       1 service.go:415] Removing service port "webhook-6750/e2e-test-webhook"
I0802 09:22:09.986208       1 proxier.go:871] Syncing iptables rules
I0802 09:22:10.086072       1 proxier.go:826] syncProxyRules took 100.464867ms
I0802 09:22:10.503790       1 proxier.go:871] Syncing iptables rules
I0802 09:22:10.534208       1 proxier.go:826] syncProxyRules took 30.950927ms
I0802 09:22:18.508767       1 service.go:275] Service deployment-4314/test-rolling-update-with-lb updated: 1 ports
I0802 09:22:18.509385       1 service.go:390] Adding new service port "deployment-4314/test-rolling-update-with-lb" at 100.71.104.248:80/TCP
I0802 09:22:18.509460       1 proxier.go:871] Syncing iptables rules
I0802 09:22:18.532581       1 proxier.go:1715] Opened local port "nodePort for deployment-4314/test-rolling-update-with-lb" (:31127/tcp)
I0802 09:22:18.537176       1 service_health.go:98] Opening healthcheck "deployment-4314/test-rolling-update-with-lb" on port 31777
I0802 09:22:18.537371       1 proxier.go:826] syncProxyRules took 28.564231ms
I0802 09:22:18.537880       1 proxier.go:871] Syncing iptables rules
I0802 09:22:18.567983       1 proxier.go:826] syncProxyRules took 30.566087ms
I0802 09:22:19.757586       1 service.go:275] Service dns-7425/test-service-2 updated: 1 ports
I0802 09:22:19.758059       1 service.go:390] Adding new service port "dns-7425/test-service-2:http" at 100.70.0.153:80/TCP
I0802 09:22:19.758134       1 proxier.go:871] Syncing iptables rules
I0802 09:22:19.787222       1 proxier.go:826] syncProxyRules took 29.595299ms
I0802 09:22:20.677330       1 service.go:275] Service deployment-4314/test-rolling-update-with-lb updated: 1 ports
I0802 09:22:20.677767       1 service.go:392] Updating existing service port "deployment-4314/test-rolling-update-with-lb" at 100.71.104.248:80/TCP
I0802 09:22:20.677832       1 proxier.go:871] Syncing iptables rules
I0802 09:22:20.710070       1 proxier.go:826] syncProxyRules took 32.698907ms
I0802 09:22:21.302412       1 service.go:275] Service services-6002/nodeport-test updated: 1 ports
I0802 09:22:21.710635       1 service.go:390] Adding new service port "services-6002/nodeport-test:http" at 100.64.32.143:80/TCP
I0802 09:22:21.710720       1 proxier.go:871] Syncing iptables rules
I0802 09:22:21.734696       1 proxier.go:1715] Opened local port "nodePort for services-6002/nodeport-test:http" (:32041/tcp)
I0802 09:22:21.739364       1 proxier.go:826] syncProxyRules took 29.156938ms
I0802 09:22:23.138728       1 proxier.go:871] Syncing iptables rules
I0802 09:22:23.169763       1 proxier.go:826] syncProxyRules took 31.56259ms
I0802 09:22:24.171547       1 proxier.go:871] Syncing iptables rules
I0802 09:22:24.203378       1 proxier.go:826] syncProxyRules took 33.34749ms
I0802 09:22:24.255359       1 service.go:275] Service services-6870/nodeport-update-service updated: 2 ports
I0802 09:22:24.544905       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-attacher updated: 0 ports
I0802 09:22:24.545392       1 service.go:392] Updating existing service port "services-6870/nodeport-update-service:tcp-port" at 100.70.108.206:80/TCP
I0802 09:22:24.545528       1 service.go:390] Adding new service port "services-6870/nodeport-update-service:udp-port" at 100.70.108.206:80/UDP
I0802 09:22:24.545540       1 service.go:415] Removing service port "provisioning-2246-2145/csi-hostpath-attacher:dummy"
I0802 09:22:24.545678       1 proxier.go:858] Stale udp service services-6870/nodeport-update-service:udp-port -> 100.70.108.206
I0802 09:22:24.545742       1 proxier.go:865] Stale udp service NodePort services-6870/nodeport-update-service:udp-port -> 31486
I0802 09:22:24.545772       1 proxier.go:871] Syncing iptables rules
I0802 09:22:24.571829       1 proxier.go:1715] Opened local port "nodePort for services-6870/nodeport-update-service:tcp-port" (:31290/tcp)
I0802 09:22:24.572099       1 proxier.go:1715] Opened local port "nodePort for services-6870/nodeport-update-service:udp-port" (:31486/udp)
I0802 09:22:24.582350       1 proxier.go:826] syncProxyRules took 37.411015ms
I0802 09:22:25.169540       1 service.go:275] Service provisioning-2246-2145/csi-hostpathplugin updated: 0 ports
I0802 09:22:25.485919       1 service.go:275] Service webhook-7979/e2e-test-webhook updated: 1 ports
I0802 09:22:25.570392       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-provisioner updated: 0 ports
I0802 09:22:25.570847       1 service.go:415] Removing service port "provisioning-2246-2145/csi-hostpathplugin:dummy"
I0802 09:22:25.570878       1 service.go:390] Adding new service port "webhook-7979/e2e-test-webhook" at 100.69.140.216:8443/TCP
I0802 09:22:25.570885       1 service.go:415] Removing service port "provisioning-2246-2145/csi-hostpath-provisioner:dummy"
I0802 09:22:25.570955       1 proxier.go:871] Syncing iptables rules
I0802 09:22:25.603545       1 proxier.go:826] syncProxyRules took 33.113209ms
I0802 09:22:25.973030       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-resizer updated: 0 ports
I0802 09:22:26.370765       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-snapshotter updated: 0 ports
I0802 09:22:26.604136       1 service.go:415] Removing service port "provisioning-2246-2145/csi-hostpath-resizer:dummy"
I0802 09:22:26.604168       1 service.go:415] Removing service port "provisioning-2246-2145/csi-hostpath-snapshotter:dummy"
I0802 09:22:26.604272       1 proxier.go:871] Syncing iptables rules
I0802 09:22:26.618554       1 service.go:275] Service webhook-7274/e2e-test-webhook updated: 1 ports
I0802 09:22:26.637183       1 proxier.go:826] syncProxyRules took 33.494471ms
I0802 09:22:27.637721       1 service.go:390] Adding new service port "webhook-7274/e2e-test-webhook" at 100.64.174.107:8443/TCP
I0802 09:22:27.637817       1 proxier.go:871] Syncing iptables rules
I0802 09:22:27.683550       1 proxier.go:826] syncProxyRules took 46.270942ms
I0802 09:22:29.361960       1 service.go:275] Service webhook-7274/e2e-test-webhook updated: 0 ports
I0802 09:22:29.362504       1 service.go:415] Removing service port "webhook-7274/e2e-test-webhook"
I0802 09:22:29.362580       1 proxier.go:871] Syncing iptables rules
I0802 09:22:29.393560       1 proxier.go:826] syncProxyRules took 31.53582ms
I0802 09:22:29.599603       1 proxier.go:871] Syncing iptables rules
I0802 09:22:29.634298       1 proxier.go:826] syncProxyRules took 35.288988ms
I0802 09:22:31.836240       1 service.go:275] Service webhook-7979/e2e-test-webhook updated: 0 ports
I0802 09:22:31.840837       1 service.go:415] Removing service port "webhook-7979/e2e-test-webhook"
I0802 09:22:31.841375       1 proxier.go:871] Syncing iptables rules
I0802 09:22:31.896765       1 proxier.go:826] syncProxyRules took 60.482216ms
I0802 09:22:31.897307       1 proxier.go:871] Syncing iptables rules
I0802 09:22:31.958087       1 proxier.go:826] syncProxyRules took 61.274803ms
I0802 09:22:36.136846       1 proxier.go:871] Syncing iptables rules
I0802 09:22:36.168403       1 proxier.go:826] syncProxyRules took 32.231049ms
I0802 09:22:39.002782       1 service.go:275] Service endpointslice-7693/example-int-port updated: 1 ports
I0802 09:22:39.003246       1 service.go:390] Adding new service port "endpointslice-7693/example-int-port:example" at 100.69.153.56:80/TCP
I0802 09:22:39.003322       1 proxier.go:871] Syncing iptables rules
I0802 09:22:39.031153       1 proxier.go:826] syncProxyRules took 28.335827ms
I0802 09:22:39.031605       1 proxier.go:871] Syncing iptables rules
I0802 09:22:39.064525       1 proxier.go:826] syncProxyRules took 33.319853ms
I0802 09:22:39.195915       1 service.go:275] Service endpointslice-7693/example-named-port updated: 1 ports
I0802 09:22:39.388056       1 service.go:275] Service endpointslice-7693/example-no-match updated: 1 ports
I0802 09:22:40.065212       1 service.go:390] Adding new service port "endpointslice-7693/example-named-port:http" at 100.70.243.177:80/TCP
I0802 09:22:40.065246       1 service.go:390] Adding new service port "endpointslice-7693/example-no-match:example-no-match" at 100.71.66.81:80/TCP
I0802 09:22:40.065329       1 proxier.go:871] Syncing iptables rules
I0802 09:22:40.094047       1 proxier.go:826] syncProxyRules took 29.366557ms
I0802 09:22:41.094939       1 proxier.go:871] Syncing iptables rules
I0802 09:22:41.142418       1 proxier.go:826] syncProxyRules took 48.240933ms
I0802 09:22:43.512404       1 service.go:275] Service webhook-6320/e2e-test-webhook updated: 1 ports
I0802 09:22:43.512873       1 service.go:390] Adding new service port "webhook-6320/e2e-test-webhook" at 100.66.87.153:8443/TCP
I0802 09:22:43.512945       1 proxier.go:871] Syncing iptables rules
I0802 09:22:43.541728       1 proxier.go:826] syncProxyRules took 29.29029ms
I0802 09:22:43.542327       1 proxier.go:871] Syncing iptables rules
I0802 09:22:43.570487       1 proxier.go:826] syncProxyRules took 28.72266ms
I0802 09:22:45.941257       1 proxier.go:871] Syncing iptables rules
I0802 09:22:45.981629       1 service.go:275] Service services-6002/nodeport-test updated: 0 ports
I0802 09:22:45.987261       1 proxier.go:826] syncProxyRules took 46.528467ms
I0802 09:22:45.987790       1 service.go:415] Removing service port "services-6002/nodeport-test:http"
I0802 09:22:45.987870       1 proxier.go:871] Syncing iptables rules
I0802 09:22:46.058352       1 proxier.go:826] syncProxyRules took 71.057627ms
I0802 09:22:46.381563       1 service.go:275] Service volume-6703-8059/csi-hostpath-attacher updated: 0 ports
I0802 09:22:46.965264       1 service.go:275] Service volume-6703-8059/csi-hostpathplugin updated: 0 ports
I0802 09:22:46.965770       1 service.go:415] Removing service port "volume-6703-8059/csi-hostpath-attacher:dummy"
I0802 09:22:46.965794       1 service.go:415] Removing service port "volume-6703-8059/csi-hostpathplugin:dummy"
I0802 09:22:46.965872       1 proxier.go:871] Syncing iptables rules
I0802 09:22:46.997437       1 proxier.go:826] syncProxyRules took 32.131644ms
I0802 09:22:47.369214       1 service.go:275] Service volume-6703-8059/csi-hostpath-provisioner updated: 0 ports
I0802 09:22:47.760002       1 service.go:275] Service volume-6703-8059/csi-hostpath-resizer updated: 0 ports
I0802 09:22:47.998089       1 service.go:415] Removing service port "volume-6703-8059/csi-hostpath-provisioner:dummy"
I0802 09:22:47.998125       1 service.go:415] Removing service port "volume-6703-8059/csi-hostpath-resizer:dummy"
I0802 09:22:47.998211       1 proxier.go:871] Syncing iptables rules
I0802 09:22:48.037994       1 proxier.go:826] syncProxyRules took 40.391455ms
I0802 09:22:48.158283       1 service.go:275] Service volume-6703-8059/csi-hostpath-snapshotter updated: 0 ports
I0802 09:22:48.632677       1 service.go:275] Service webhook-6320/e2e-test-webhook updated: 0 ports
I0802 09:22:49.038635       1 service.go:415] Removing service port "volume-6703-8059/csi-hostpath-snapshotter:dummy"
I0802 09:22:49.038668       1 service.go:415] Removing service port "webhook-6320/e2e-test-webhook"
I0802 09:22:49.038752       1 proxier.go:871] Syncing iptables rules
I0802 09:22:49.075657       1 proxier.go:826] syncProxyRules took 37.531386ms
I0802 09:22:56.339411       1 service.go:275] Service services-2118/externalname-service updated: 1 ports
I0802 09:22:56.340153       1 service.go:390] Adding new service port "services-2118/externalname-service:http" at 100.70.236.135:80/TCP
I0802 09:22:56.340216       1 proxier.go:871] Syncing iptables rules
I0802 09:22:56.383751       1 proxier.go:1715] Opened local port "nodePort for services-2118/externalname-service:http" (:30683/tcp)
I0802 09:22:56.390683       1 proxier.go:826] syncProxyRules took 51.233155ms
I0802 09:22:56.391204       1 proxier.go:871] Syncing iptables rules
I0802 09:22:56.431826       1 proxier.go:826] syncProxyRules took 41.093176ms
I0802 09:22:57.843667       1 proxier.go:871] Syncing iptables rules
I0802 09:22:57.876652       1 proxier.go:826] syncProxyRules took 33.422625ms
I0802 09:22:58.877392       1 proxier.go:871] Syncing iptables rules
I0802 09:22:58.927307       1 proxier.go:826] syncProxyRules took 50.513348ms
I0802 09:23:01.292600       1 
proxier.go:871] Syncing iptables rules\nI0802 09:23:01.358187       1 proxier.go:826] syncProxyRules took 66.068072ms\nI0802 09:23:01.482839       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:01.513546       1 proxier.go:826] syncProxyRules took 31.17163ms\nI0802 09:23:01.654254       1 service.go:275] Service services-6870/nodeport-update-service updated: 0 ports\nI0802 09:23:02.297058       1 service.go:415] Removing service port \"services-6870/nodeport-update-service:tcp-port\"\nI0802 09:23:02.297090       1 service.go:415] Removing service port \"services-6870/nodeport-update-service:udp-port\"\nI0802 09:23:02.297182       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:02.414723       1 proxier.go:826] syncProxyRules took 118.179263ms\nI0802 09:23:02.739384       1 service.go:275] Service dns-7425/test-service-2 updated: 0 ports\nI0802 09:23:03.415244       1 service.go:415] Removing service port \"dns-7425/test-service-2:http\"\nI0802 09:23:03.415381       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:03.457405       1 proxier.go:826] syncProxyRules took 42.5652ms\nI0802 09:23:07.564452       1 service.go:275] Service services-6709/clusterip-service updated: 1 ports\nI0802 09:23:07.564926       1 service.go:390] Adding new service port \"services-6709/clusterip-service\" at 100.64.155.117:80/TCP\nI0802 09:23:07.565017       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:07.593013       1 proxier.go:826] syncProxyRules took 28.519515ms\nI0802 09:23:07.593441       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:07.620223       1 proxier.go:826] syncProxyRules took 27.176089ms\nI0802 09:23:07.760765       1 service.go:275] Service services-6709/externalsvc updated: 1 ports\nI0802 09:23:08.620904       1 service.go:390] Adding new service port \"services-6709/externalsvc\" at 100.64.23.153:80/TCP\nI0802 09:23:08.621163       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:08.662852       1 proxier.go:826] syncProxyRules 
took 42.489363ms\nI0802 09:23:09.604791       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:09.635778       1 proxier.go:826] syncProxyRules took 31.381396ms\nI0802 09:23:11.727042       1 service.go:275] Service services-6709/clusterip-service updated: 0 ports\nI0802 09:23:11.728026       1 service.go:415] Removing service port \"services-6709/clusterip-service\"\nI0802 09:23:11.728115       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:11.761498       1 proxier.go:826] syncProxyRules took 34.411618ms\nI0802 09:23:11.761949       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:11.788627       1 proxier.go:826] syncProxyRules took 27.093734ms\nI0802 09:23:13.925921       1 service.go:275] Service crd-webhook-3792/e2e-test-crd-conversion-webhook updated: 1 ports\nI0802 09:23:13.926664       1 service.go:390] Adding new service port \"crd-webhook-3792/e2e-test-crd-conversion-webhook\" at 100.68.77.198:9443/TCP\nI0802 09:23:13.926734       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:13.959246       1 proxier.go:826] syncProxyRules took 33.285052ms\nI0802 09:23:13.959643       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:13.986332       1 proxier.go:826] syncProxyRules took 27.042181ms\nI0802 09:23:15.683845       1 service.go:275] Service services-2118/externalname-service updated: 0 ports\nI0802 09:23:15.684346       1 service.go:415] Removing service port \"services-2118/externalname-service:http\"\nI0802 09:23:15.684426       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:15.743000       1 proxier.go:826] syncProxyRules took 59.111918ms\nI0802 09:23:16.743613       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:16.770956       1 proxier.go:826] syncProxyRules took 27.802507ms\nI0802 09:23:16.973691       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:17.004423       1 proxier.go:826] syncProxyRules took 31.143804ms\nI0802 09:23:17.665451       1 service.go:275] Service endpointslice-7693/example-int-port 
updated: 0 ports\nI0802 09:23:17.680205       1 service.go:275] Service endpointslice-7693/example-named-port updated: 0 ports\nI0802 09:23:17.724410       1 service.go:275] Service endpointslice-7693/example-no-match updated: 0 ports\nI0802 09:23:17.799125       1 service.go:275] Service crd-webhook-3792/e2e-test-crd-conversion-webhook updated: 0 ports\nI0802 09:23:18.005293       1 service.go:415] Removing service port \"crd-webhook-3792/e2e-test-crd-conversion-webhook\"\nI0802 09:23:18.005331       1 service.go:415] Removing service port \"endpointslice-7693/example-int-port:example\"\nI0802 09:23:18.005361       1 service.go:415] Removing service port \"endpointslice-7693/example-named-port:http\"\nI0802 09:23:18.005372       1 service.go:415] Removing service port \"endpointslice-7693/example-no-match:example-no-match\"\nI0802 09:23:18.005568       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:18.058999       1 proxier.go:826] syncProxyRules took 54.41854ms\nI0802 09:23:22.161891       1 service.go:275] Service conntrack-3978/svc-udp updated: 1 ports\nI0802 09:23:22.162360       1 service.go:390] Adding new service port \"conntrack-3978/svc-udp:udp\" at 100.69.241.82:80/UDP\nI0802 09:23:22.162428       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:22.190336       1 proxier.go:1715] Opened local port \"nodePort for conntrack-3978/svc-udp:udp\" (:30862/udp)\nI0802 09:23:22.195483       1 proxier.go:826] syncProxyRules took 33.548818ms\nI0802 09:23:22.196536       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:22.228942       1 proxier.go:826] syncProxyRules took 33.423695ms\nI0802 09:23:24.812999       1 service.go:275] Service provisioning-2278-17/csi-hostpath-attacher updated: 1 ports\nI0802 09:23:24.813432       1 service.go:390] Adding new service port \"provisioning-2278-17/csi-hostpath-attacher:dummy\" at 100.70.66.174:12345/TCP\nI0802 09:23:24.813498       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:24.840291       1 
proxier.go:826] syncProxyRules took 27.25201ms\nI0802 09:23:24.840822       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:24.867903       1 proxier.go:826] syncProxyRules took 27.559941ms\nI0802 09:23:25.386166       1 service.go:275] Service provisioning-2278-17/csi-hostpathplugin updated: 1 ports\nI0802 09:23:25.769823       1 service.go:275] Service provisioning-2278-17/csi-hostpath-provisioner updated: 1 ports\nI0802 09:23:25.868498       1 service.go:390] Adding new service port \"provisioning-2278-17/csi-hostpathplugin:dummy\" at 100.68.74.21:12345/TCP\nI0802 09:23:25.868531       1 service.go:390] Adding new service port \"provisioning-2278-17/csi-hostpath-provisioner:dummy\" at 100.68.177.103:12345/TCP\nI0802 09:23:25.868602       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:25.895396       1 proxier.go:826] syncProxyRules took 27.371251ms\nI0802 09:23:26.162207       1 service.go:275] Service provisioning-2278-17/csi-hostpath-resizer updated: 1 ports\nI0802 09:23:26.546345       1 service.go:275] Service provisioning-2278-17/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:23:26.895930       1 service.go:390] Adding new service port \"provisioning-2278-17/csi-hostpath-resizer:dummy\" at 100.67.68.161:12345/TCP\nI0802 09:23:26.896001       1 service.go:390] Adding new service port \"provisioning-2278-17/csi-hostpath-snapshotter:dummy\" at 100.68.107.88:12345/TCP\nI0802 09:23:26.896122       1 proxier.go:858] Stale udp service conntrack-3978/svc-udp:udp -> 100.69.241.82\nI0802 09:23:26.896186       1 proxier.go:865] Stale udp service NodePort conntrack-3978/svc-udp:udp -> 30862\nI0802 09:23:26.896210       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:26.928302       1 proxier.go:826] syncProxyRules took 32.759533ms\nI0802 09:23:26.949040       1 service.go:275] Service services-6709/externalsvc updated: 0 ports\nI0802 09:23:27.928826       1 service.go:415] Removing service port \"services-6709/externalsvc\"\nI0802 
09:23:27.928917       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:27.955063       1 proxier.go:826] syncProxyRules took 26.623779ms\nI0802 09:23:28.739359       1 service.go:275] Service proxy-4404/proxy-service-b6j8z updated: 4 ports\nI0802 09:23:28.955869       1 service.go:390] Adding new service port \"proxy-4404/proxy-service-b6j8z:portname2\" at 100.69.18.219:81/TCP\nI0802 09:23:28.955901       1 service.go:390] Adding new service port \"proxy-4404/proxy-service-b6j8z:tlsportname1\" at 100.69.18.219:443/TCP\nI0802 09:23:28.955912       1 service.go:390] Adding new service port \"proxy-4404/proxy-service-b6j8z:tlsportname2\" at 100.69.18.219:444/TCP\nI0802 09:23:28.955922       1 service.go:390] Adding new service port \"proxy-4404/proxy-service-b6j8z:portname1\" at 100.69.18.219:80/TCP\nI0802 09:23:28.956008       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:28.994295       1 proxier.go:826] syncProxyRules took 38.847677ms\nI0802 09:23:30.118463       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:30.150145       1 proxier.go:826] syncProxyRules took 32.118554ms\nI0802 09:23:35.881121       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:35.934382       1 proxier.go:826] syncProxyRules took 53.832573ms\nI0802 09:23:37.137510       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:37.171701       1 proxier.go:826] syncProxyRules took 35.455725ms\nI0802 09:23:37.172283       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:37.199440       1 proxier.go:826] syncProxyRules took 27.706359ms\nI0802 09:23:42.226792       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:42.269532       1 proxier.go:826] syncProxyRules took 43.493624ms\nI0802 09:23:46.537899       1 service.go:275] Service pods-9854/fooservice updated: 1 ports\nI0802 09:23:46.538695       1 service.go:390] Adding new service port \"pods-9854/fooservice\" at 100.68.243.160:8765/TCP\nI0802 09:23:46.538822       1 proxier.go:871] Syncing iptables rules\nI0802 
09:23:46.614550       1 proxier.go:826] syncProxyRules took 76.608661ms\nI0802 09:23:46.615369       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:46.725775       1 proxier.go:826] syncProxyRules took 111.182569ms\nI0802 09:23:52.235036       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:52.275357       1 proxier.go:826] syncProxyRules took 40.923396ms\nI0802 09:23:52.287834       1 service.go:275] Service proxy-4404/proxy-service-b6j8z updated: 0 ports\nI0802 09:23:52.288387       1 service.go:415] Removing service port \"proxy-4404/proxy-service-b6j8z:portname1\"\nI0802 09:23:52.288414       1 service.go:415] Removing service port \"proxy-4404/proxy-service-b6j8z:portname2\"\nI0802 09:23:52.288475       1 service.go:415] Removing service port \"proxy-4404/proxy-service-b6j8z:tlsportname1\"\nI0802 09:23:52.288489       1 service.go:415] Removing service port \"proxy-4404/proxy-service-b6j8z:tlsportname2\"\nI0802 09:23:52.288561       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:52.330698       1 proxier.go:826] syncProxyRules took 42.821337ms\nI0802 09:23:55.293532       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:55.349712       1 proxier.go:826] syncProxyRules took 56.814098ms\nI0802 09:23:55.453255       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:55.507108       1 service.go:275] Service pods-9854/fooservice updated: 0 ports\nI0802 09:23:55.514695       1 proxier.go:826] syncProxyRules took 63.342471ms\nI0802 09:23:56.515460       1 service.go:415] Removing service port \"pods-9854/fooservice\"\nI0802 09:23:56.515604       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:56.547522       1 proxier.go:826] syncProxyRules took 32.515589ms\nI0802 09:23:57.366056       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:57.401662       1 proxier.go:826] syncProxyRules took 36.079335ms\nI0802 09:23:58.406550       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-attacher updated: 1 ports\nI0802 
09:23:58.407058       1 service.go:390] Adding new service port \"ephemeral-9710-9555/csi-hostpath-attacher:dummy\" at 100.67.215.140:12345/TCP\nI0802 09:23:58.407124       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:58.446222       1 proxier.go:826] syncProxyRules took 39.636387ms\nI0802 09:23:58.987457       1 service.go:275] Service ephemeral-9710-9555/csi-hostpathplugin updated: 1 ports\nI0802 09:23:59.447036       1 service.go:390] Adding new service port \"ephemeral-9710-9555/csi-hostpathplugin:dummy\" at 100.70.173.208:12345/TCP\nI0802 09:23:59.447142       1 proxier.go:871] Syncing iptables rules\nI0802 09:23:59.492272       1 proxier.go:826] syncProxyRules took 45.908797ms\nI0802 09:23:59.578677       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-provisioner updated: 1 ports\nI0802 09:23:59.971546       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-resizer updated: 1 ports\nI0802 09:24:00.405292       1 service.go:390] Adding new service port \"ephemeral-9710-9555/csi-hostpath-provisioner:dummy\" at 100.64.220.139:12345/TCP\nI0802 09:24:00.405322       1 service.go:390] Adding new service port \"ephemeral-9710-9555/csi-hostpath-resizer:dummy\" at 100.65.129.225:12345/TCP\nI0802 09:24:00.405424       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:00.429212       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:24:00.442926       1 proxier.go:826] syncProxyRules took 38.27503ms\nI0802 09:24:01.443612       1 service.go:390] Adding new service port \"ephemeral-9710-9555/csi-hostpath-snapshotter:dummy\" at 100.69.208.138:12345/TCP\nI0802 09:24:01.443735       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:01.472192       1 proxier.go:826] syncProxyRules took 29.054818ms\nI0802 09:24:01.608287       1 service.go:275] Service conntrack-3978/svc-udp updated: 0 ports\nI0802 09:24:02.472833       1 service.go:415] Removing service port 
\"conntrack-3978/svc-udp:udp\"\nI0802 09:24:02.473020       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:02.504788       1 proxier.go:826] syncProxyRules took 32.43999ms\nI0802 09:24:03.506228       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:03.547050       1 proxier.go:826] syncProxyRules took 42.137275ms\nI0802 09:24:04.869571       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:04.909114       1 proxier.go:826] syncProxyRules took 40.187955ms\nI0802 09:24:05.909992       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:05.966168       1 proxier.go:826] syncProxyRules took 56.916282ms\nI0802 09:24:07.925240       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:07.980898       1 proxier.go:826] syncProxyRules took 56.330207ms\nI0802 09:24:09.233894       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:09.371370       1 proxier.go:826] syncProxyRules took 138.125823ms\nI0802 09:24:09.570490       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:09.630801       1 proxier.go:826] syncProxyRules took 60.957536ms\nI0802 09:24:12.055154       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:12.083233       1 proxier.go:826] syncProxyRules took 28.507786ms\nI0802 09:24:12.142360       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:12.170573       1 proxier.go:826] syncProxyRules took 28.626122ms\nI0802 09:24:13.171152       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:13.198981       1 proxier.go:826] syncProxyRules took 28.263501ms\nI0802 09:24:14.489708       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:14.536572       1 proxier.go:826] syncProxyRules took 48.35388ms\nI0802 09:24:15.537360       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:15.578462       1 proxier.go:826] syncProxyRules took 41.727923ms\nI0802 09:24:16.685235       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-attacher updated: 1 ports\nI0802 09:24:16.685615       1 service.go:390] Adding new 
service port \"volume-expand-9751-6514/csi-hostpath-attacher:dummy\" at 100.67.115.219:12345/TCP\nI0802 09:24:16.685681       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:16.713670       1 proxier.go:826] syncProxyRules took 28.395505ms\nI0802 09:24:17.266697       1 service.go:275] Service volume-expand-9751-6514/csi-hostpathplugin updated: 1 ports\nI0802 09:24:17.267174       1 service.go:390] Adding new service port \"volume-expand-9751-6514/csi-hostpathplugin:dummy\" at 100.71.58.236:12345/TCP\nI0802 09:24:17.267259       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:17.304346       1 proxier.go:826] syncProxyRules took 37.580971ms\nI0802 09:24:17.653880       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-provisioner updated: 1 ports\nI0802 09:24:18.047448       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-resizer updated: 1 ports\nI0802 09:24:18.057744       1 service.go:390] Adding new service port \"volume-expand-9751-6514/csi-hostpath-provisioner:dummy\" at 100.67.64.219:12345/TCP\nI0802 09:24:18.057773       1 service.go:390] Adding new service port \"volume-expand-9751-6514/csi-hostpath-resizer:dummy\" at 100.65.122.98:12345/TCP\nI0802 09:24:18.057839       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:18.085596       1 proxier.go:826] syncProxyRules took 28.21662ms\nI0802 09:24:18.430126       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:24:19.086297       1 service.go:390] Adding new service port \"volume-expand-9751-6514/csi-hostpath-snapshotter:dummy\" at 100.71.86.20:12345/TCP\nI0802 09:24:19.086403       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:19.127449       1 proxier.go:826] syncProxyRules took 41.673474ms\nI0802 09:24:20.128066       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:20.156252       1 proxier.go:826] syncProxyRules took 28.672319ms\nI0802 09:24:21.157077       1 proxier.go:871] Syncing iptables 
rules\nI0802 09:24:21.212714       1 proxier.go:826] syncProxyRules took 56.279822ms\nI0802 09:24:22.213679       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:22.273885       1 proxier.go:826] syncProxyRules took 60.931142ms\nI0802 09:24:23.275214       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:23.317648       1 proxier.go:826] syncProxyRules took 43.677475ms\nI0802 09:24:24.318441       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:24.367592       1 proxier.go:826] syncProxyRules took 49.744919ms\nI0802 09:24:27.649844       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:27.699271       1 proxier.go:826] syncProxyRules took 49.993165ms\nI0802 09:24:27.699848       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:27.780341       1 proxier.go:826] syncProxyRules took 81.034023ms\nI0802 09:24:28.718180       1 service.go:275] Service provisioning-2278-17/csi-hostpath-attacher updated: 0 ports\nI0802 09:24:28.718724       1 service.go:415] Removing service port \"provisioning-2278-17/csi-hostpath-attacher:dummy\"\nI0802 09:24:28.718822       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:28.764912       1 proxier.go:826] syncProxyRules took 46.689659ms\nI0802 09:24:29.337504       1 service.go:275] Service provisioning-2278-17/csi-hostpathplugin updated: 0 ports\nI0802 09:24:29.737420       1 service.go:275] Service provisioning-2278-17/csi-hostpath-provisioner updated: 0 ports\nI0802 09:24:29.737891       1 service.go:415] Removing service port \"provisioning-2278-17/csi-hostpathplugin:dummy\"\nI0802 09:24:29.737915       1 service.go:415] Removing service port \"provisioning-2278-17/csi-hostpath-provisioner:dummy\"\nI0802 09:24:29.738026       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:29.766469       1 proxier.go:826] syncProxyRules took 29.002672ms\nI0802 09:24:30.138095       1 service.go:275] Service provisioning-2278-17/csi-hostpath-resizer updated: 0 ports\nI0802 09:24:30.534473       1 service.go:275] 
Service provisioning-2278-17/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:24:30.767041       1 service.go:415] Removing service port \"provisioning-2278-17/csi-hostpath-resizer:dummy\"\nI0802 09:24:30.767076       1 service.go:415] Removing service port \"provisioning-2278-17/csi-hostpath-snapshotter:dummy\"\nI0802 09:24:30.767173       1 proxier.go:871] Syncing iptables rules\nI0802 09:24:30.814122       1 proxier.go:826] syncProxyRules took 47.522978ms\nI0802 09:24:41.715739       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0802 09:24:41.716208       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0802 09:25:53.885414       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:53.918509       1 proxier.go:826] syncProxyRules took 33.581686ms\nI0802 09:25:53.919173       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:53.955117       1 proxier.go:826] syncProxyRules took 36.573148ms\nI0802 09:25:55.192129       1 service.go:275] Service services-1470/up-down-1 updated: 1 ports\nI0802 09:25:55.192626       1 service.go:390] Adding new service port \"services-1470/up-down-1\" at 100.70.4.215:80/TCP\nI0802 09:25:55.192704       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:55.224672       1 proxier.go:826] syncProxyRules took 32.501999ms\nI0802 09:25:56.225382       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:56.268072       1 proxier.go:826] syncProxyRules took 43.258866ms\nI0802 09:25:56.782135       1 service.go:275] Service kubectl-4027/agnhost-replica updated: 1 ports\nI0802 09:25:57.268795       1 service.go:390] Adding new service port \"kubectl-4027/agnhost-replica\" at 100.71.189.13:6379/TCP\nI0802 09:25:57.268891       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:57.332010       1 proxier.go:826] syncProxyRules took 63.797869ms\nI0802 09:25:58.332716       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:58.361024       1 
proxier.go:826] syncProxyRules took 28.848899ms\nI0802 09:25:58.561341       1 service.go:275] Service kubectl-4027/agnhost-primary updated: 1 ports\nI0802 09:25:59.317558       1 service.go:390] Adding new service port \"kubectl-4027/agnhost-primary\" at 100.66.15.222:6379/TCP\nI0802 09:25:59.317665       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:59.364059       1 proxier.go:826] syncProxyRules took 52.649343ms\nI0802 09:25:59.892685       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:59.962643       1 proxier.go:826] syncProxyRules took 70.532757ms\nI0802 09:26:00.356313       1 service.go:275] Service kubectl-4027/frontend updated: 1 ports\nI0802 09:26:00.963288       1 service.go:390] Adding new service port \"kubectl-4027/frontend\" at 100.67.213.81:80/TCP\nI0802 09:26:00.963360       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:01.020526       1 proxier.go:826] syncProxyRules took 57.753532ms\nI0802 09:26:01.288115       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-attacher updated: 1 ports\nI0802 09:26:01.877803       1 service.go:275] Service volume-expand-5125-7028/csi-hostpathplugin updated: 1 ports\nI0802 09:26:01.886952       1 service.go:390] Adding new service port \"volume-expand-5125-7028/csi-hostpath-attacher:dummy\" at 100.68.29.179:12345/TCP\nI0802 09:26:01.887002       1 service.go:390] Adding new service port \"volume-expand-5125-7028/csi-hostpathplugin:dummy\" at 100.68.226.200:12345/TCP\nI0802 09:26:01.887070       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:01.916349       1 proxier.go:826] syncProxyRules took 29.786715ms\nI0802 09:26:02.265635       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-provisioner updated: 1 ports\nI0802 09:26:02.663135       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-resizer updated: 1 ports\nI0802 09:26:02.916908       1 service.go:390] Adding new service port \"volume-expand-5125-7028/csi-hostpath-provisioner:dummy\" at 
100.66.131.101:12345/TCP\nI0802 09:26:02.916945       1 service.go:390] Adding new service port \"volume-expand-5125-7028/csi-hostpath-resizer:dummy\" at 100.66.76.190:12345/TCP\nI0802 09:26:02.917061       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:02.945247       1 proxier.go:826] syncProxyRules took 28.74452ms\nI0802 09:26:03.052786       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:26:03.947690       1 service.go:390] Adding new service port \"volume-expand-5125-7028/csi-hostpath-snapshotter:dummy\" at 100.69.27.176:12345/TCP\nI0802 09:26:03.948021       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:04.026177       1 proxier.go:826] syncProxyRules took 79.13525ms\nI0802 09:26:04.965081       1 service.go:275] Service services-1470/up-down-2 updated: 1 ports\nI0802 09:26:04.965650       1 service.go:390] Adding new service port \"services-1470/up-down-2\" at 100.70.71.58:80/TCP\nI0802 09:26:04.965761       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:04.993666       1 proxier.go:826] syncProxyRules took 28.54675ms\nI0802 09:26:05.905796       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:05.935895       1 proxier.go:826] syncProxyRules took 30.611606ms\nI0802 09:26:05.982063       1 service.go:275] Service webhook-9862/e2e-test-webhook updated: 1 ports\nI0802 09:26:06.897345       1 service.go:390] Adding new service port \"webhook-9862/e2e-test-webhook\" at 100.66.227.222:8443/TCP\nI0802 09:26:06.897468       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:06.929910       1 proxier.go:826] syncProxyRules took 33.093433ms\nI0802 09:26:07.930690       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:07.964719       1 proxier.go:826] syncProxyRules took 34.668686ms\nI0802 09:26:08.965620       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:09.014440       1 proxier.go:826] syncProxyRules took 49.551537ms\nI0802 09:26:10.018074       1 proxier.go:871] 
Syncing iptables rules
I0802 09:26:10.070800       1 proxier.go:826] syncProxyRules took 54.004862ms
I0802 09:26:10.887911       1 proxier.go:871] Syncing iptables rules
I0802 09:26:10.918842       1 proxier.go:826] syncProxyRules took 31.536845ms
I0802 09:26:11.320239       1 service.go:275] Service webhook-9862/e2e-test-webhook updated: 0 ports
I0802 09:26:11.919535       1 service.go:415] Removing service port "webhook-9862/e2e-test-webhook"
I0802 09:26:11.919674       1 proxier.go:871] Syncing iptables rules
I0802 09:26:11.950056       1 proxier.go:826] syncProxyRules took 31.037788ms
I0802 09:26:12.950916       1 proxier.go:871] Syncing iptables rules
I0802 09:26:12.981829       1 proxier.go:826] syncProxyRules took 31.537762ms
I0802 09:26:13.982612       1 proxier.go:871] Syncing iptables rules
I0802 09:26:14.016003       1 proxier.go:826] syncProxyRules took 33.982575ms
I0802 09:26:14.987018       1 service.go:275] Service kubectl-4027/agnhost-replica updated: 0 ports
I0802 09:26:14.987721       1 service.go:415] Removing service port "kubectl-4027/agnhost-replica"
I0802 09:26:14.987858       1 proxier.go:871] Syncing iptables rules
I0802 09:26:15.037158       1 proxier.go:826] syncProxyRules took 50.084073ms
I0802 09:26:15.735881       1 service.go:275] Service services-5870/service-headless-toggled updated: 1 ports
I0802 09:26:15.871403       1 service.go:275] Service kubectl-4027/agnhost-primary updated: 0 ports
I0802 09:26:15.904049       1 service.go:390] Adding new service port "services-5870/service-headless-toggled" at 100.66.82.148:80/TCP
I0802 09:26:15.904148       1 service.go:415] Removing service port "kubectl-4027/agnhost-primary"
I0802 09:26:15.904280       1 proxier.go:871] Syncing iptables rules
I0802 09:26:15.947897       1 proxier.go:826] syncProxyRules took 44.34424ms
I0802 09:26:16.793844       1 service.go:275] Service kubectl-4027/frontend updated: 0 ports
I0802 09:26:16.948711       1 service.go:415] Removing service port "kubectl-4027/frontend"
I0802 09:26:16.948818       1 proxier.go:871] Syncing iptables rules
I0802 09:26:16.990528       1 proxier.go:826] syncProxyRules took 42.389642ms
I0802 09:26:18.178610       1 proxier.go:871] Syncing iptables rules
I0802 09:26:18.212897       1 proxier.go:826] syncProxyRules took 34.910858ms
I0802 09:26:18.896390       1 proxier.go:871] Syncing iptables rules
I0802 09:26:18.925851       1 proxier.go:826] syncProxyRules took 29.966393ms
I0802 09:26:31.266384       1 service.go:275] Service deployment-4314/test-rolling-update-with-lb updated: 0 ports
I0802 09:26:31.266839       1 service.go:415] Removing service port "deployment-4314/test-rolling-update-with-lb"
I0802 09:26:31.266935       1 proxier.go:871] Syncing iptables rules
I0802 09:26:31.311925       1 service_health.go:83] Closing healthcheck "deployment-4314/test-rolling-update-with-lb" on port 31777
I0802 09:26:31.312017       1 proxier.go:826] syncProxyRules took 45.594433ms
I0802 09:26:36.843771       1 service.go:275] Service services-5870/service-headless-toggled updated: 0 ports
I0802 09:26:36.844310       1 service.go:415] Removing service port "services-5870/service-headless-toggled"
I0802 09:26:36.844392       1 proxier.go:871] Syncing iptables rules
I0802 09:26:36.886487       1 proxier.go:826] syncProxyRules took 42.678963ms
I0802 09:26:39.420517       1 proxier.go:871] Syncing iptables rules
I0802 09:26:39.455000       1 proxier.go:826] syncProxyRules took 34.960515ms
I0802 09:26:40.422515       1 proxier.go:871] Syncing iptables rules
I0802 09:26:40.450693       1 proxier.go:826] syncProxyRules took 28.614162ms
I0802 09:26:42.883669       1 service.go:275] Service webhook-3975/e2e-test-webhook updated: 1 ports
I0802 09:26:42.884146       1 service.go:390] Adding new service port "webhook-3975/e2e-test-webhook" at 100.69.160.98:8443/TCP
I0802 09:26:42.884223       1 proxier.go:871] Syncing iptables rules
I0802 09:26:42.913982       1 proxier.go:826] syncProxyRules took 30.253885ms
I0802 09:26:42.914498       1 proxier.go:871] Syncing iptables rules
I0802 09:26:42.943243       1 proxier.go:826] syncProxyRules took 29.225933ms
I0802 09:26:44.026502       1 service.go:275] Service services-5870/service-headless-toggled updated: 1 ports
I0802 09:26:44.026950       1 service.go:390] Adding new service port "services-5870/service-headless-toggled" at 100.66.82.148:80/TCP
I0802 09:26:44.027052       1 proxier.go:871] Syncing iptables rules
I0802 09:26:44.056308       1 proxier.go:826] syncProxyRules took 29.770191ms
I0802 09:26:46.299329       1 service.go:275] Service services-1470/up-down-1 updated: 0 ports
I0802 09:26:46.299798       1 service.go:415] Removing service port "services-1470/up-down-1"
I0802 09:26:46.299882       1 proxier.go:871] Syncing iptables rules
I0802 09:26:46.328745       1 proxier.go:826] syncProxyRules took 29.375779ms
I0802 09:26:46.908175       1 proxier.go:871] Syncing iptables rules
I0802 09:26:46.944491       1 proxier.go:826] syncProxyRules took 36.889968ms
I0802 09:26:47.429548       1 service.go:275] Service webhook-3975/e2e-test-webhook updated: 0 ports
I0802 09:26:47.430006       1 service.go:415] Removing service port "webhook-3975/e2e-test-webhook"
I0802 09:26:47.430088       1 proxier.go:871] Syncing iptables rules
I0802 09:26:47.465773       1 proxier.go:826] syncProxyRules took 36.182241ms
I0802 09:26:48.467100       1 proxier.go:871] Syncing iptables rules
I0802 09:26:48.501213       1 proxier.go:826] syncProxyRules took 35.292142ms
I0802 09:27:03.386706       1 service.go:275] Service services-1470/up-down-3 updated: 1 ports
I0802 09:27:03.387161       1 service.go:390] Adding new service port "services-1470/up-down-3" at 100.64.1.233:80/TCP
I0802 09:27:03.387219       1 proxier.go:871] Syncing iptables rules
I0802 09:27:03.415699       1 proxier.go:826] syncProxyRules took 28.949694ms
I0802 09:27:03.416241       1 proxier.go:871] Syncing iptables rules
I0802 09:27:03.446047       1 proxier.go:826] syncProxyRules took 30.312267ms
I0802 09:27:05.570283       1 proxier.go:871] Syncing iptables rules
I0802 09:27:05.613164       1 proxier.go:826] syncProxyRules took 43.273841ms
I0802 09:27:05.750510       1 proxier.go:871] Syncing iptables rules
I0802 09:27:05.780007       1 proxier.go:826] syncProxyRules took 29.930493ms
I0802 09:27:06.256538       1 service.go:275] Service services-5870/service-headless-toggled updated: 0 ports
I0802 09:27:06.780897       1 service.go:415] Removing service port "services-5870/service-headless-toggled"
I0802 09:27:06.781165       1 proxier.go:871] Syncing iptables rules
I0802 09:27:06.830580       1 proxier.go:826] syncProxyRules took 50.398078ms
I0802 09:27:08.951952       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-attacher updated: 1 ports
I0802 09:27:08.952397       1 service.go:390] Adding new service port "provisioning-2971-8427/csi-hostpath-attacher:dummy" at 100.64.190.82:12345/TCP
I0802 09:27:08.952473       1 proxier.go:871] Syncing iptables rules
I0802 09:27:08.993057       1 proxier.go:826] syncProxyRules took 41.05708ms
I0802 09:27:08.993700       1 proxier.go:871] Syncing iptables rules
I0802 09:27:09.027379       1 proxier.go:826] syncProxyRules took 34.285267ms
I0802 09:27:09.533620       1 service.go:275] Service provisioning-2971-8427/csi-hostpathplugin updated: 1 ports
I0802 09:27:09.917651       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-provisioner updated: 1 ports
I0802 09:27:10.027927       1 service.go:390] Adding new service port "provisioning-2971-8427/csi-hostpathplugin:dummy" at 100.65.218.93:12345/TCP
I0802 09:27:10.027963       1 service.go:390] Adding new service port "provisioning-2971-8427/csi-hostpath-provisioner:dummy" at 100.66.137.241:12345/TCP
I0802 09:27:10.028072       1 proxier.go:871] Syncing iptables rules
I0802 09:27:10.058211       1 proxier.go:826] syncProxyRules took 30.681799ms
I0802 09:27:10.303707       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-resizer updated: 1 ports
I0802 09:27:10.693105       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-snapshotter updated: 1 ports
I0802 09:27:11.059072       1 service.go:390] Adding new service port "provisioning-2971-8427/csi-hostpath-resizer:dummy" at 100.67.174.173:12345/TCP
I0802 09:27:11.059103       1 service.go:390] Adding new service port "provisioning-2971-8427/csi-hostpath-snapshotter:dummy" at 100.68.17.221:12345/TCP
I0802 09:27:11.059198       1 proxier.go:871] Syncing iptables rules
I0802 09:27:11.100999       1 proxier.go:826] syncProxyRules took 42.521206ms
I0802 09:27:17.835338       1 proxier.go:871] Syncing iptables rules
I0802 09:27:17.888118       1 proxier.go:826] syncProxyRules took 53.37444ms
I0802 09:27:18.831681       1 proxier.go:871] Syncing iptables rules
I0802 09:27:18.869141       1 proxier.go:826] syncProxyRules took 37.956348ms
I0802 09:27:19.834423       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-attacher updated: 0 ports
I0802 09:27:19.834870       1 service.go:415] Removing service port "volume-expand-5125-7028/csi-hostpath-attacher:dummy"
I0802 09:27:19.834953       1 proxier.go:871] Syncing iptables rules
I0802 09:27:19.863883       1 proxier.go:826] syncProxyRules took 29.41031ms
I0802 09:27:20.305142       1 proxier.go:871] Syncing iptables rules
I0802 09:27:20.350339       1 proxier.go:826] syncProxyRules took 45.719321ms
I0802 09:27:20.420263       1 service.go:275] Service volume-expand-5125-7028/csi-hostpathplugin updated: 0 ports
I0802 09:27:20.811188       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-provisioner updated: 0 ports
I0802 09:27:21.204199       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-resizer updated: 0 ports
I0802 09:27:21.204632       1 service.go:415] Removing service port "volume-expand-5125-7028/csi-hostpathplugin:dummy"
I0802 09:27:21.204651       1 service.go:415] Removing service port "volume-expand-5125-7028/csi-hostpath-provisioner:dummy"
I0802 09:27:21.204659       1 service.go:415] Removing service port "volume-expand-5125-7028/csi-hostpath-resizer:dummy"
I0802 09:27:21.204761       1 proxier.go:871] Syncing iptables rules
I0802 09:27:21.244173       1 proxier.go:826] syncProxyRules took 39.936913ms
I0802 09:27:21.594352       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-snapshotter updated: 0 ports
I0802 09:27:22.244875       1 service.go:415] Removing service port "volume-expand-5125-7028/csi-hostpath-snapshotter:dummy"
I0802 09:27:22.245070       1 proxier.go:871] Syncing iptables rules
I0802 09:27:22.273880       1 proxier.go:826] syncProxyRules took 29.45076ms
I0802 09:27:23.431430       1 proxier.go:871] Syncing iptables rules
I0802 09:27:23.463614       1 proxier.go:826] syncProxyRules took 32.68278ms
I0802 09:27:23.708287       1 service.go:275] Service webhook-8390/e2e-test-webhook updated: 1 ports
I0802 09:27:24.464483       1 service.go:390] Adding new service port "webhook-8390/e2e-test-webhook" at 100.66.116.29:8443/TCP
I0802 09:27:24.464598       1 proxier.go:871] Syncing iptables rules
I0802 09:27:24.492964       1 proxier.go:826] syncProxyRules took 29.057961ms
I0802 09:27:31.643236       1 service.go:275] Service dns-5822/dns-test-service-3 updated: 1 ports
I0802 09:27:31.643726       1 service.go:390] Adding new service port "dns-5822/dns-test-service-3:http" at 100.65.38.43:80/TCP
I0802 09:27:31.643811       1 proxier.go:871] Syncing iptables rules
I0802 09:27:31.673136       1 proxier.go:826] syncProxyRules took 29.857744ms
I0802 09:27:33.140438       1 proxier.go:871] Syncing iptables rules
I0802 09:27:33.179687       1 service.go:275] Service services-1470/up-down-2 updated: 0 ports
I0802 09:27:33.181131       1 proxier.go:826] syncProxyRules took 41.122446ms
I0802 09:27:33.181580       1 service.go:415] Removing service port "services-1470/up-down-2"
I0802 09:27:33.181714       1 proxier.go:871] Syncing iptables rules
I0802 09:27:33.203703       1 service.go:275] Service services-1470/up-down-3 updated: 0 ports
I0802 09:27:33.214527       1 proxier.go:826] syncProxyRules took 33.363467ms
I0802 09:27:34.215120       1 service.go:415] Removing service port "services-1470/up-down-3"
I0802 09:27:34.215223       1 proxier.go:871] Syncing iptables rules
I0802 09:27:34.251269       1 proxier.go:826] syncProxyRules took 36.599947ms
I0802 09:27:37.362363       1 service.go:275] Service dns-5822/dns-test-service-3 updated: 0 ports
I0802 09:27:37.362824       1 service.go:415] Removing service port "dns-5822/dns-test-service-3:http"
I0802 09:27:37.362922       1 proxier.go:871] Syncing iptables rules
I0802 09:27:37.403755       1 proxier.go:826] syncProxyRules took 41.334044ms
I0802 09:27:38.407062       1 service.go:275] Service webhook-8390/e2e-test-webhook updated: 0 ports
I0802 09:27:38.407531       1 service.go:415] Removing service port "webhook-8390/e2e-test-webhook"
I0802 09:27:38.407611       1 proxier.go:871] Syncing iptables rules
I0802 09:27:38.435627       1 proxier.go:826] syncProxyRules took 28.53036ms
I0802 09:27:38.436152       1 proxier.go:871] Syncing iptables rules
I0802 09:27:38.465037       1 proxier.go:826] syncProxyRules took 29.37568ms
I0802 09:27:39.869823       1 service.go:275] Service webhook-4152/e2e-test-webhook updated: 1 ports
I0802 09:27:39.870331       1 service.go:390] Adding new service port "webhook-4152/e2e-test-webhook" at 100.66.210.237:8443/TCP
I0802 09:27:39.870411       1 proxier.go:871] Syncing iptables rules
I0802 09:27:39.898147       1 proxier.go:826] syncProxyRules took 28.280228ms
I0802 09:27:40.898853       1 proxier.go:871] Syncing iptables rules
I0802 09:27:40.928301       1 proxier.go:826] syncProxyRules took 29.925186ms
I0802 09:27:42.977819       1 service.go:275] Service webhook-4152/e2e-test-webhook updated: 0 ports
I0802 09:27:42.978349       1 service.go:415] Removing service port "webhook-4152/e2e-test-webhook"
I0802 09:27:42.978443       1 proxier.go:871] Syncing iptables rules
I0802 09:27:43.044342       1 proxier.go:826] syncProxyRules took 66.479745ms
I0802 09:27:43.449831       1 proxier.go:871] Syncing iptables rules
I0802 09:27:43.503741       1 proxier.go:826] syncProxyRules took 54.361457ms
I0802 09:28:09.155326       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-attacher updated: 1 ports
I0802 09:28:09.155845       1 service.go:390] Adding new service port "provisioning-4508-9194/csi-hostpath-attacher:dummy" at 100.65.180.222:12345/TCP
I0802 09:28:09.155922       1 proxier.go:871] Syncing iptables rules
I0802 09:28:09.194247       1 proxier.go:826] syncProxyRules took 38.883445ms
I0802 09:28:09.194754       1 proxier.go:871] Syncing iptables rules
I0802 09:28:09.235871       1 proxier.go:826] syncProxyRules took 41.542353ms
I0802 09:28:09.730545       1 service.go:275] Service provisioning-4508-9194/csi-hostpathplugin updated: 1 ports
I0802 09:28:10.116112       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-provisioner updated: 1 ports
I0802 09:28:10.236583       1 service.go:390] Adding new service port "provisioning-4508-9194/csi-hostpathplugin:dummy" at 100.65.224.153:12345/TCP
I0802 09:28:10.236617       1 service.go:390] Adding new service port "provisioning-4508-9194/csi-hostpath-provisioner:dummy" at 100.68.168.136:12345/TCP
I0802 09:28:10.236690       1 proxier.go:871] Syncing iptables rules
I0802 09:28:10.277875       1 proxier.go:826] syncProxyRules took 41.821999ms
I0802 09:28:10.504994       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-resizer updated: 1 ports
I0802 09:28:10.890027       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-snapshotter updated: 1 ports
I0802 09:28:11.170561       1 service.go:390] Adding new service port "provisioning-4508-9194/csi-hostpath-resizer:dummy" at 100.64.131.120:12345/TCP
I0802 09:28:11.170591       1 service.go:390] Adding new service port "provisioning-4508-9194/csi-hostpath-snapshotter:dummy" at 100.71.16.190:12345/TCP
I0802 09:28:11.170787       1 proxier.go:871] Syncing iptables rules
I0802 09:28:11.199100       1 proxier.go:826] syncProxyRules took 28.94049ms
I0802 09:28:12.363370       1 proxier.go:871] Syncing iptables rules
I0802 09:28:12.393649       1 proxier.go:826] syncProxyRules took 31.439332ms
I0802 09:28:12.683386       1 service.go:275] Service volume-provisioning-4199/glusterfs-dynamic-4a7e9e77-8307-4e54-b024-19bba81d7cc4 updated: 1 ports
I0802 09:28:13.395181       1 service.go:390] Adding new service port "volume-provisioning-4199/glusterfs-dynamic-4a7e9e77-8307-4e54-b024-19bba81d7cc4" at 100.66.79.76:1/TCP
I0802 09:28:13.395375       1 proxier.go:871] Syncing iptables rules
I0802 09:28:13.478600       1 proxier.go:826] syncProxyRules took 84.790407ms
I0802 09:28:13.791160       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-attacher updated: 0 ports
I0802 09:28:14.253739       1 service.go:415] Removing service port "provisioning-2971-8427/csi-hostpath-attacher:dummy"
I0802 09:28:14.253854       1 proxier.go:871] Syncing iptables rules
I0802 09:28:14.289473       1 proxier.go:826] syncProxyRules took 36.134624ms
I0802 09:28:14.387906       1 service.go:275] Service provisioning-2971-8427/csi-hostpathplugin updated: 0 ports
I0802 09:28:14.781579       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-provisioner updated: 0 ports
I0802 09:28:15.173422       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-resizer updated: 0 ports
I0802 09:28:15.173944       1 service.go:415] Removing service port "provisioning-2971-8427/csi-hostpathplugin:dummy"
I0802 09:28:15.173997       1 service.go:415] Removing service port "provisioning-2971-8427/csi-hostpath-provisioner:dummy"
I0802 09:28:15.174008       1 service.go:415] Removing service port "provisioning-2971-8427/csi-hostpath-resizer:dummy"
I0802 09:28:15.174104       1 proxier.go:871] Syncing iptables rules
I0802 09:28:15.218845       1 proxier.go:826] syncProxyRules took 45.380705ms
I0802 09:28:15.570696       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-snapshotter updated: 0 ports
I0802 09:28:16.048155       1 service.go:275] Service volume-provisioning-4199/glusterfs-dynamic-4a7e9e77-8307-4e54-b024-19bba81d7cc4 updated: 0 ports
I0802 09:28:16.219487       1 service.go:415] Removing service port "provisioning-2971-8427/csi-hostpath-snapshotter:dummy"
I0802 09:28:16.219523       1 service.go:415] Removing service port "volume-provisioning-4199/glusterfs-dynamic-4a7e9e77-8307-4e54-b024-19bba81d7cc4"
I0802 09:28:16.219634       1 proxier.go:871] Syncing iptables rules
I0802 09:28:16.248765       1 proxier.go:826] syncProxyRules took 29.749593ms
I0802 09:28:19.376379       1 service.go:275] Service services-878/service-proxy-toggled updated: 1 ports
I0802 09:28:19.376921       1 service.go:390] Adding new service port "services-878/service-proxy-toggled" at 100.69.249.190:80/TCP
I0802 09:28:19.377043       1 proxier.go:871] Syncing iptables rules
I0802 09:28:19.408054       1 proxier.go:826] syncProxyRules took 31.634614ms
I0802 09:28:19.409052       1 proxier.go:871] Syncing iptables rules
I0802 09:28:19.442163       1 proxier.go:826] syncProxyRules took 33.903027ms
I0802 09:28:19.809801       1 service.go:275] Service webhook-3845/e2e-test-webhook updated: 1 ports
I0802 09:28:20.442796       1 service.go:390] Adding new service port "webhook-3845/e2e-test-webhook" at 100.65.164.170:8443/TCP
I0802 09:28:20.442899       1 proxier.go:871] Syncing iptables rules
I0802 09:28:20.491484       1 proxier.go:826] syncProxyRules took 49.179369ms
I0802 09:28:21.492288       1 proxier.go:871] Syncing iptables rules
I0802 09:28:21.520826       1 proxier.go:826] syncProxyRules took 29.181521ms
I0802 09:28:25.984708       1 proxier.go:871] Syncing iptables rules
I0802 09:28:26.197773       1 proxier.go:826] syncProxyRules took 213.741296ms
I0802 09:28:37.455418       1 service.go:275] Service webhook-3845/e2e-test-webhook updated: 0 ports
I0802 09:28:37.456014       1 service.go:415] Removing service port "webhook-3845/e2e-test-webhook"
I0802 09:28:37.456170       1 proxier.go:871] Syncing iptables rules
I0802 09:28:37.486546       1 proxier.go:826] syncProxyRules took 31.087723ms
I0802 09:28:37.487116       1 proxier.go:871] Syncing iptables rules
I0802 09:28:37.518255       1 proxier.go:826] syncProxyRules took 31.674227ms
I0802 09:28:40.099864       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-attacher updated: 0 ports
I0802 09:28:40.100391       1 service.go:415] Removing service port "provisioning-4508-9194/csi-hostpath-attacher:dummy"
I0802 09:28:40.100469       1 proxier.go:871] Syncing iptables rules
I0802 09:28:40.138948       1 proxier.go:826] syncProxyRules took 39.045485ms
I0802 09:28:40.142027       1 proxier.go:871] Syncing iptables rules
I0802 09:28:40.183356       1 proxier.go:826] syncProxyRules took 41.886276ms
I0802 09:28:40.283230       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-attacher updated: 1 ports
I0802 09:28:40.687086       1 service.go:275] Service provisioning-4508-9194/csi-hostpathplugin updated: 0 ports
I0802 09:28:40.858554       1 service.go:275] Service volume-expand-5280-1128/csi-hostpathplugin updated: 1 ports
I0802 09:28:41.090448       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-provisioner updated: 0 ports
I0802 09:28:41.104198       1 service.go:390] Adding new service port "volume-expand-5280-1128/csi-hostpath-attacher:dummy" at 100.68.184.116:12345/TCP
I0802 09:28:41.104230       1 service.go:415] Removing service port "provisioning-4508-9194/csi-hostpathplugin:dummy"
I0802 09:28:41.104242       1 service.go:390] Adding new service port "volume-expand-5280-1128/csi-hostpathplugin:dummy" at 100.69.248.233:12345/TCP
I0802 09:28:41.104250       1 service.go:415] Removing service port "provisioning-4508-9194/csi-hostpath-provisioner:dummy"
I0802 09:28:41.104358       1 proxier.go:871] Syncing iptables rules
I0802 09:28:41.142545       1 proxier.go:826] syncProxyRules took 38.833183ms
I0802 09:28:41.261488       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-provisioner updated: 1 ports
I0802 09:28:41.485565       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-resizer updated: 0 ports
I0802 09:28:41.740500       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-resizer updated: 1 ports
I0802 09:28:41.951144       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-snapshotter updated: 0 ports
I0802 09:28:42.135872       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-snapshotter updated: 1 ports
I0802 09:28:42.141903       1 service.go:390] Adding new service port "volume-expand-5280-1128/csi-hostpath-snapshotter:dummy" at 100.65.54.148:12345/TCP
I0802 09:28:42.141930       1 service.go:390] Adding new service port "volume-expand-5280-1128/csi-hostpath-provisioner:dummy" at 100.70.52.135:12345/TCP
I0802 09:28:42.141947       1 service.go:415] Removing service port "provisioning-4508-9194/csi-hostpath-resizer:dummy"
I0802 09:28:42.141961       1 service.go:390] Adding new service port "volume-expand-5280-1128/csi-hostpath-resizer:dummy" at 100.66.253.214:12345/TCP
I0802 09:28:42.142012       1 service.go:415] Removing service port "provisioning-4508-9194/csi-hostpath-snapshotter:dummy"
I0802 09:28:42.142169       1 proxier.go:871] Syncing iptables rules
I0802 09:28:42.186093       1 proxier.go:826] syncProxyRules took 50.17696ms
I0802 09:28:43.188060       1 proxier.go:871] Syncing iptables rules
I0802 09:28:43.299621       1 proxier.go:826] syncProxyRules took 112.315978ms
I0802 09:28:45.716402       1 proxier.go:871] Syncing iptables rules
I0802 09:28:45.744250       1 proxier.go:826] syncProxyRules took 28.390687ms
I0802 09:28:47.117904       1 proxier.go:871] Syncing iptables rules
I0802 09:28:47.145741       1 proxier.go:826] syncProxyRules took 28.367108ms
I0802 09:28:49.114448       1 proxier.go:871] Syncing iptables rules
I0802 09:28:49.142376       1 proxier.go:826] syncProxyRules took 28.481096ms
I0802 09:28:49.514008       1 proxier.go:871] Syncing iptables rules
I0802 09:28:49.555586       1 proxier.go:826] syncProxyRules took 42.241524ms
I0802 09:28:50.400407       1 service.go:275] Service services-878/service-proxy-toggled updated: 0 ports
I0802 09:28:50.401103       1 service.go:415] Removing service port "services-878/service-proxy-toggled"
I0802 09:28:50.401205       1 proxier.go:871] Syncing iptables rules
I0802 09:28:50.429658       1 proxier.go:826] syncProxyRules took 29.207076ms
I0802 09:28:51.430524       1 proxier.go:871] Syncing iptables rules
I0802 09:28:51.462550       1 proxier.go:826] syncProxyRules took 32.755196ms
I0802 09:28:57.608707       1 service.go:275] Service services-878/service-proxy-toggled updated: 1 ports
I0802 09:28:57.610507       1 service.go:390] Adding new service port "services-878/service-proxy-toggled" at 100.69.249.190:80/TCP
I0802 09:28:57.610628       1 proxier.go:871] Syncing iptables rules
I0802 09:28:57.663237       1 proxier.go:826] syncProxyRules took 54.491862ms
I0802 09:28:57.664282       1 proxier.go:871] Syncing iptables rules
I0802 09:28:57.721341       1 proxier.go:826] syncProxyRules took 57.945442ms
I0802 09:29:10.458105       1 service.go:275] Service webhook-6678/e2e-test-webhook updated: 1 ports
I0802 09:29:10.458853       1 service.go:390] Adding new service port "webhook-6678/e2e-test-webhook" at 100.67.74.125:8443/TCP
I0802 09:29:10.458949       1 proxier.go:871] Syncing iptables rules
I0802 09:29:10.500419       1 proxier.go:826] syncProxyRules took 42.27161ms
I0802 09:29:10.501080       1 proxier.go:871] Syncing iptables rules
I0802 09:29:10.541807       1 proxier.go:826] syncProxyRules took 41.351224ms
I0802 09:29:13.209164       1 service.go:275] Service webhook-6678/e2e-test-webhook updated: 0 ports
I0802 09:29:13.209716       1 service.go:415] Removing service port "webhook-6678/e2e-test-webhook"
I0802 09:29:13.209795       1 proxier.go:871] Syncing iptables rules
I0802 09:29:13.261242       1 proxier.go:826] syncProxyRules took 52.038874ms
I0802 09:29:13.766401       1 proxier.go:871] Syncing iptables rules
I0802 09:29:13.794760       1 proxier.go:826] syncProxyRules took 28.929306ms
I0802 09:29:18.934916       1 service.go:275] Service services-6068/multi-endpoint-test updated: 2 ports
I0802 09:29:18.935703       1 service.go:390] Adding new service port "services-6068/multi-endpoint-test:portname1" at 100.66.255.138:80/TCP
I0802 09:29:18.935738       1 service.go:390] Adding new service port "services-6068/multi-endpoint-test:portname2" at 100.66.255.138:81/TCP
I0802 09:29:18.935807       1 proxier.go:871] Syncing iptables rules
I0802 09:29:18.982850       1 proxier.go:826] syncProxyRules took 47.901308ms
I0802 09:29:18.983454       1 proxier.go:871] Syncing iptables rules
I0802 09:29:19.029628       1 proxier.go:826] syncProxyRules took 46.742921ms
I0802 09:29:19.940337       1 service.go:275] Service services-878/service-proxy-toggled updated: 0 ports
I0802 09:29:19.940996       1 service.go:415] Removing service port "services-878/service-proxy-toggled"
I0802 09:29:19.941132       1 proxier.go:871] Syncing iptables rules
I0802 09:29:19.972366       1 proxier.go:826] syncProxyRules took 31.969348ms
I0802 09:29:20.973153       1 proxier.go:871] Syncing iptables rules
I0802 09:29:21.001209       1 proxier.go:826] syncProxyRules took 28.712113ms
I0802 09:29:22.002244       1 proxier.go:871] Syncing iptables rules
I0802 09:29:22.044657       1 proxier.go:826] syncProxyRules took 43.196654ms
I0802 09:29:23.738801       1 proxier.go:871] Syncing iptables rules
I0802 09:29:23.792564       1 proxier.go:826] syncProxyRules took 54.347574ms
I0802 09:29:24.514131       1 service.go:275] Service dns-4443/test-service-2 updated: 1 ports
I0802 09:29:24.515023       1 service.go:390] Adding new service port "dns-4443/test-service-2:http" at 100.71.195.231:80/TCP
I0802 09:29:24.515155       1 proxier.go:871] Syncing iptables rules
I0802 09:29:24.563920       1 proxier.go:826] syncProxyRules took 49.754745ms
I0802 09:29:25.191791       1 proxier.go:871] Syncing iptables rules
I0802 09:29:25.220280       1 proxier.go:826] syncProxyRules took 29.102781ms
I0802 09:29:26.258523       1 proxier.go:871] Syncing iptables rules
I0802 09:29:26.289317       1 proxier.go:826] syncProxyRules took 31.415034ms
I0802 09:29:27.012938       1 service.go:275] Service services-6068/multi-endpoint-test updated: 0 ports
I0802 09:29:27.013558       1 service.go:415] Removing service port "services-6068/multi-endpoint-test:portname1"
I0802 09:29:27.013590       1 service.go:415] Removing service port "services-6068/multi-endpoint-test:portname2"
I0802 09:29:27.013707       1 proxier.go:871] Syncing iptables rules
I0802 09:29:27.057563       1 proxier.go:826] syncProxyRules took 44.570702ms
I0802 09:29:28.067865       1 proxier.go:871] Syncing iptables rules
I0802 09:29:28.116761       1 proxier.go:826] syncProxyRules took 49.692743ms
I0802 09:29:43.212200       1 service.go:275] Service provisioning-16-2189/csi-hostpath-attacher updated: 1 ports
I0802 09:29:43.212877       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpath-attacher:dummy" at 100.64.227.251:12345/TCP
I0802 09:29:43.213015       1 proxier.go:871] Syncing iptables rules
I0802 09:29:43.252857       1 proxier.go:826] syncProxyRules took 40.620033ms
I0802 09:29:43.253658       1 proxier.go:871] Syncing iptables rules
I0802 09:29:43.302316       1 proxier.go:826] syncProxyRules took 49.419964ms
I0802 09:29:43.793078       1 service.go:275] Service provisioning-16-2189/csi-hostpathplugin updated: 1 ports
I0802 09:29:44.179704       1 service.go:275] Service provisioning-16-2189/csi-hostpath-provisioner updated: 1 ports
I0802 09:29:44.303146       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpath-provisioner:dummy" at 100.67.83.87:12345/TCP
I0802 09:29:44.303190       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpathplugin:dummy" at 100.68.142.127:12345/TCP
I0802 09:29:44.303306       1 proxier.go:871] Syncing iptables rules
I0802 09:29:44.447789       1 proxier.go:826] syncProxyRules took 145.27127ms
I0802 09:29:44.566940       1 service.go:275] Service provisioning-16-2189/csi-hostpath-resizer updated: 1 ports
I0802 09:29:44.954963       1 service.go:275] Service provisioning-16-2189/csi-hostpath-snapshotter updated: 1 ports
I0802 09:29:45.448701       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpath-resizer:dummy" at 100.64.33.107:12345/TCP
I0802 09:29:45.448732       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpath-snapshotter:dummy" at 100.67.177.99:12345/TCP
I0802 09:29:45.448843       1 proxier.go:871] Syncing iptables rules
I0802 09:29:45.480447       1 proxier.go:826] syncProxyRules took 32.392682ms
I0802 09:29:46.923367       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-attacher updated: 0 ports
I0802 09:29:46.924217       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpath-attacher:dummy"
I0802 09:29:46.924321       1 proxier.go:871] Syncing iptables rules
I0802 09:29:46.982962       1 proxier.go:826] syncProxyRules took 59.434301ms
I0802 09:29:47.506913       1 service.go:275] Service volume-expand-5280-1128/csi-hostpathplugin updated: 0 ports
I0802 09:29:47.507500       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpathplugin:dummy"
I0802 09:29:47.507608       1 proxier.go:871] Syncing iptables rules
I0802 09:29:47.538204       1 proxier.go:826] syncProxyRules took 31.255491ms
I0802 09:29:47.912765       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-provisioner updated: 0 ports
I0802 09:29:48.308924       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-resizer updated: 0 ports
I0802 09:29:48.309592       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpath-provisioner:dummy"
I0802 09:29:48.309613       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpath-resizer:dummy"
I0802 09:29:48.309751       1 proxier.go:871] Syncing iptables rules
I0802 09:29:48.358139       1 proxier.go:826] syncProxyRules took 49.177176ms
I0802 09:29:48.705809       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-snapshotter updated: 0 ports
I0802 09:29:49.358842       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpath-snapshotter:dummy"
I0802 09:29:49.359021       1 proxier.go:871] Syncing iptables rules
I0802 09:29:49.411033       1 proxier.go:826] syncProxyRules took 52.761942ms
I0802 09:29:50.581024       1 proxier.go:871] Syncing iptables rules
I0802 09:29:50.617848       1 proxier.go:826] syncProxyRules took 37.544916ms
I0802 09:29:51.618533       1 proxier.go:871] Syncing iptables rules
I0802 09:29:51.657327       1 proxier.go:826] syncProxyRules took 39.32741ms
I0802 09:30:05.294363       1 proxier.go:871] Syncing iptables rules
I0802 09:30:05.329199       1 proxier.go:826] syncProxyRules took 35.383639ms
I0802 09:30:05.488245       1 service.go:275] Service dns-4443/test-service-2 updated: 0 ports
I0802 09:30:05.488830       1 service.go:415] Removing service port "dns-4443/test-service-2:http"
I0802 09:30:05.488926       1 proxier.go:871] Syncing iptables rules
I0802 09:30:05.517425       1 proxier.go:826] syncProxyRules took 29.136222ms
I0802 09:30:06.518386       1 proxier.go:871] Syncing iptables rules
I0802 09:30:06.581265       1 proxier.go:826] syncProxyRules took 63.695707ms
I0802 09:30:10.255812       1 service.go:275] Service volumemode-685-3249/csi-hostpath-attacher updated: 1 ports
I0802 09:30:10.256453       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpath-attacher:dummy" at 100.71.70.1:12345/TCP
I0802 09:30:10.256545       1 proxier.go:871] Syncing iptables rules
I0802 09:30:10.289533       1 proxier.go:826] syncProxyRules took 33.68236ms
I0802 09:30:10.290091       1 proxier.go:871] Syncing iptables rules
I0802 09:30:10.319350       1 proxier.go:826] syncProxyRules took 29.778875ms
I0802 09:30:10.838705       1 service.go:275] Service volumemode-685-3249/csi-hostpathplugin updated: 1 ports
I0802 09:30:11.230102       1 service.go:275] Service volumemode-685-3249/csi-hostpath-provisioner updated: 1 ports
I0802 09:30:11.320190       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpathplugin:dummy" at 100.64.160.4:12345/TCP
I0802 09:30:11.320225       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpath-provisioner:dummy" at 100.64.110.51:12345/TCP
I0802 09:30:11.320311       1 proxier.go:871] Syncing iptables rules
I0802 09:30:11.363712       1 proxier.go:826] syncProxyRules took 44.219588ms
I0802 09:30:11.622710       1 service.go:275] Service volumemode-685-3249/csi-hostpath-resizer updated: 1 ports
I0802 09:30:12.012914       1 service.go:275] Service volumemode-685-3249/csi-hostpath-snapshotter updated: 1 ports
I0802 09:30:12.364696       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpath-snapshotter:dummy" at 100.67.184.34:12345/TCP
I0802 09:30:12.364734       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpath-resizer:dummy" at 100.68.58.197:12345/TCP
I0802 09:30:12.364822       1 proxier.go:871] Syncing iptables rules
I0802 09:30:12.418732       1 proxier.go:826] syncProxyRules took 54.731594ms
I0802 09:30:14.197867       1 service.go:275] Service provisioning-16-2189/csi-hostpath-attacher updated: 0 ports
I0802 09:30:14.198458       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpath-attacher:dummy"
I0802 09:30:14.198542       1 proxier.go:871] Syncing iptables rules
I0802 09:30:14.229441       1 proxier.go:826] syncProxyRules took 31.513012ms
I0802 09:30:14.661212       1 proxier.go:871] Syncing iptables rules
I0802 09:30:14.703240       1 proxier.go:826] syncProxyRules took 42.732504ms
I0802 09:30:14.798716       1 service.go:275] Service provisioning-16-2189/csi-hostpathplugin updated: 0 ports
I0802 09:30:15.193672       1 service.go:275] Service provisioning-16-2189/csi-hostpath-provisioner updated: 0 ports
I0802 09:30:15.590387       1 service.go:275] Service provisioning-16-2189/csi-hostpath-resizer updated: 0 ports
I0802 09:30:15.590929       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpath-provisioner:dummy"
I0802 09:30:15.590952       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpath-resizer:dummy"
I0802 09:30:15.590963       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpathplugin:dummy"
I0802 09:30:15.591076       1 proxier.go:871] Syncing iptables rules
I0802 09:30:15.621297       1 proxier.go:826] syncProxyRules took 30.871895ms
I0802 09:30:15.998465       1 service.go:275] Service provisioning-16-2189/csi-hostpath-snapshotter updated: 0 ports
I0802 09:30:16.440855       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpath-snapshotter:dummy"
I0802 09:30:16.441028       1 proxier.go:871] Syncing iptables rules
I0802 09:30:16.480687       1 proxier.go:826] syncProxyRules took 40.465093ms
I0802 09:30:17.432787       1 proxier.go:871] Syncing iptables rules
I0802 09:30:17.475275       1 proxier.go:826] syncProxyRules took 42.987966ms
I0802 09:30:18.475949       1 proxier.go:871] Syncing iptables rules
I0802 09:30:18.503734       1 proxier.go:826] syncProxyRules took 28.315981ms
I0802 09:30:24.695096       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-attacher updated: 1 ports
I0802 09:30:24.695630       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpath-attacher:dummy" at 100.69.156.225:12345/TCP
I0802 09:30:24.695715       1 proxier.go:871] Syncing iptables rules
I0802 09:30:24.723755       1 proxier.go:826] syncProxyRules took 28.613047ms
I0802 09:30:24.724280       1 proxier.go:871] Syncing iptables rules
I0802 09:30:24.753091       1 proxier.go:826] syncProxyRules took 29.302956ms
I0802 09:30:25.273674       1 service.go:275] Service volumemode-5208-5908/csi-hostpathplugin updated: 1 ports
I0802 09:30:25.682298       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-provisioner updated: 1 ports
I0802 09:30:25.754016       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpathplugin:dummy" at 100.71.116.224:12345/TCP
I0802 09:30:25.754047       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpath-provisioner:dummy" at 100.71.179.180:12345/TCP
I0802 09:30:25.754158       1 proxier.go:871] Syncing iptables rules
I0802 09:30:25.803463       1 proxier.go:826] syncProxyRules took 50.213029ms
I0802 09:30:26.047252       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-resizer updated: 1 ports
I0802 09:30:26.433217       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-snapshotter updated: 1 ports
I0802 09:30:26.804125       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpath-resizer:dummy" at 100.70.139.57:12345/TCP
I0802 09:30:26.804159       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpath-snapshotter:dummy" at 100.65.43.24:12345/TCP
I0802 09:30:26.804240       1 proxier.go:871] Syncing iptables rules
I0802 09:30:26.835119       1 proxier.go:826] syncProxyRules took 31.522189ms
I0802 09:30:29.914073       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-attacher updated: 1 ports
I0802 09:30:29.914640       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpath-attacher:dummy" at 100.65.85.203:12345/TCP
I0802 09:30:29.914741       1 proxier.go:871] Syncing iptables rules
I0802 09:30:29.943967       1 proxier.go:826] syncProxyRules took 29.856571ms
I0802 09:30:29.944778       1 proxier.go:871] Syncing iptables rules
I0802 09:30:29.974200       1 proxier.go:826] syncProxyRules took 30.179428ms
I0802 09:30:30.499050       1 service.go:275] Service ephemeral-2872-6578/csi-hostpathplugin updated: 1 ports
I0802 09:30:30.884712       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-provisioner updated: 1 ports
I0802 09:30:30.974856       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpathplugin:dummy" at 100.67.116.240:12345/TCP
I0802 09:30:30.974892       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpath-provisioner:dummy" at 100.68.12.225:12345/TCP
I0802 09:30:30.975025       1 proxier.go:871] Syncing iptables rules
I0802 09:30:31.012330       1 proxier.go:826] syncProxyRules took 37.986186ms
I0802 09:30:31.276025       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-resizer updated: 1 ports
I0802 09:30:31.667679       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-snapshotter updated: 1 ports
I0802 09:30:32.012943       1 service.go:390] Adding new service port 
\"ephemeral-2872-6578/csi-hostpath-resizer:dummy\" at 100.67.86.111:12345/TCP\nI0802 09:30:32.013009       1 service.go:390] Adding new service port \"ephemeral-2872-6578/csi-hostpath-snapshotter:dummy\" at 100.65.59.49:12345/TCP\nI0802 09:30:32.013087       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:32.041785       1 proxier.go:826] syncProxyRules took 29.320097ms\nI0802 09:30:33.973331       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:34.006837       1 proxier.go:826] syncProxyRules took 34.026399ms\nI0802 09:30:34.373152       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:34.403408       1 proxier.go:826] syncProxyRules took 30.782654ms\nI0802 09:30:35.569389       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:35.598863       1 proxier.go:826] syncProxyRules took 30.15146ms\nI0802 09:30:35.973724       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:36.004728       1 proxier.go:826] syncProxyRules took 31.485483ms\nI0802 09:30:38.376751       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:38.409026       1 proxier.go:826] syncProxyRules took 32.776153ms\nI0802 09:30:38.779750       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:38.821248       1 proxier.go:826] syncProxyRules took 41.998281ms\nI0802 09:30:39.579251       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:39.633175       1 proxier.go:826] syncProxyRules took 54.566691ms\nI0802 09:30:41.375929       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:41.409656       1 proxier.go:826] syncProxyRules took 34.257297ms\nI0802 09:30:41.781312       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:41.845587       1 proxier.go:826] syncProxyRules took 65.834583ms\nI0802 09:30:44.964295       1 service.go:275] Service kubectl-8312/agnhost-primary updated: 1 ports\nI0802 09:30:44.964993       1 service.go:390] Adding new service port \"kubectl-8312/agnhost-primary\" at 100.69.109.236:6379/TCP\nI0802 09:30:44.965075       1 proxier.go:871] 
Syncing iptables rules\nI0802 09:30:45.009442       1 proxier.go:826] syncProxyRules took 45.113282ms\nI0802 09:30:45.010090       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:45.043599       1 proxier.go:826] syncProxyRules took 34.11988ms\nI0802 09:30:46.044507       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:46.095857       1 proxier.go:826] syncProxyRules took 52.109558ms\nI0802 09:30:46.704225       1 service.go:275] Service volumemode-685-3249/csi-hostpath-attacher updated: 0 ports\nI0802 09:30:47.096492       1 service.go:415] Removing service port \"volumemode-685-3249/csi-hostpath-attacher:dummy\"\nI0802 09:30:47.096593       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:47.134662       1 proxier.go:826] syncProxyRules took 38.660239ms\nI0802 09:30:47.293571       1 service.go:275] Service volumemode-685-3249/csi-hostpathplugin updated: 0 ports\nI0802 09:30:47.695112       1 service.go:275] Service volumemode-685-3249/csi-hostpath-provisioner updated: 0 ports\nI0802 09:30:48.094148       1 service.go:275] Service volumemode-685-3249/csi-hostpath-resizer updated: 0 ports\nI0802 09:30:48.094897       1 service.go:415] Removing service port \"volumemode-685-3249/csi-hostpathplugin:dummy\"\nI0802 09:30:48.094922       1 service.go:415] Removing service port \"volumemode-685-3249/csi-hostpath-provisioner:dummy\"\nI0802 09:30:48.094932       1 service.go:415] Removing service port \"volumemode-685-3249/csi-hostpath-resizer:dummy\"\nI0802 09:30:48.095067       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:48.128571       1 proxier.go:826] syncProxyRules took 34.388201ms\nI0802 09:30:48.496655       1 service.go:275] Service volumemode-685-3249/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:30:49.129364       1 service.go:415] Removing service port \"volumemode-685-3249/csi-hostpath-snapshotter:dummy\"\nI0802 09:30:49.129486       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:49.158534       1 proxier.go:826] 
syncProxyRules took 29.725609ms\nI0802 09:30:58.832431       1 service.go:275] Service kubectl-8312/agnhost-primary updated: 0 ports\nI0802 09:30:58.833321       1 service.go:415] Removing service port \"kubectl-8312/agnhost-primary\"\nI0802 09:30:58.833435       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:58.890895       1 proxier.go:826] syncProxyRules took 58.385198ms\nI0802 09:30:58.896452       1 proxier.go:871] Syncing iptables rules\nI0802 09:30:58.937840       1 proxier.go:826] syncProxyRules took 46.902301ms\nI0802 09:31:07.359765       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-attacher updated: 0 ports\nI0802 09:31:07.360337       1 service.go:415] Removing service port \"volumemode-5208-5908/csi-hostpath-attacher:dummy\"\nI0802 09:31:07.360432       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:07.389908       1 proxier.go:826] syncProxyRules took 30.100249ms\nI0802 09:31:07.390572       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:07.420401       1 proxier.go:826] syncProxyRules took 30.446674ms\nI0802 09:31:07.942480       1 service.go:275] Service volumemode-5208-5908/csi-hostpathplugin updated: 0 ports\nI0802 09:31:08.331522       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-provisioner updated: 0 ports\nI0802 09:31:08.421183       1 service.go:415] Removing service port \"volumemode-5208-5908/csi-hostpathplugin:dummy\"\nI0802 09:31:08.421215       1 service.go:415] Removing service port \"volumemode-5208-5908/csi-hostpath-provisioner:dummy\"\nI0802 09:31:08.421330       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:08.464570       1 proxier.go:826] syncProxyRules took 44.03988ms\nI0802 09:31:08.720771       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-resizer updated: 0 ports\nI0802 09:31:09.111850       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:31:09.465207       1 service.go:415] Removing service port 
\"volumemode-5208-5908/csi-hostpath-resizer:dummy\"\nI0802 09:31:09.465248       1 service.go:415] Removing service port \"volumemode-5208-5908/csi-hostpath-snapshotter:dummy\"\nI0802 09:31:09.465338       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:09.496358       1 proxier.go:826] syncProxyRules took 31.658532ms\nI0802 09:31:39.571954       1 service.go:275] Service services-2137/endpoint-test2 updated: 1 ports\nI0802 09:31:39.572379       1 service.go:390] Adding new service port \"services-2137/endpoint-test2\" at 100.65.50.254:80/TCP\nI0802 09:31:39.572437       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:39.602040       1 proxier.go:826] syncProxyRules took 30.032593ms\nI0802 09:31:39.602508       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:39.631397       1 proxier.go:826] syncProxyRules took 29.321193ms\nI0802 09:31:39.873996       1 service.go:275] Service services-295/tolerate-unready updated: 1 ports\nI0802 09:31:40.586532       1 service.go:390] Adding new service port \"services-295/tolerate-unready:http\" at 100.67.114.99:80/TCP\nI0802 09:31:40.586645       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:40.621482       1 proxier.go:826] syncProxyRules took 35.566922ms\nI0802 09:31:48.986092       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:49.015701       1 proxier.go:826] syncProxyRules took 30.317213ms\nI0802 09:31:53.386324       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:53.435475       1 proxier.go:826] syncProxyRules took 50.005611ms\nI0802 09:31:54.883838       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:54.912626       1 proxier.go:826] syncProxyRules took 29.428068ms\nI0802 09:31:55.870771       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:55.901351       1 proxier.go:826] syncProxyRules took 31.23155ms\nI0802 09:31:55.902170       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:55.937844       1 proxier.go:826] syncProxyRules took 36.45733ms\nI0802 
09:31:56.943625       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:56.989420       1 proxier.go:826] syncProxyRules took 51.419272ms\nI0802 09:31:57.637198       1 service.go:275] Service services-2137/endpoint-test2 updated: 0 ports\nI0802 09:31:57.990210       1 service.go:415] Removing service port \"services-2137/endpoint-test2\"\nI0802 09:31:57.990329       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:58.018573       1 proxier.go:826] syncProxyRules took 29.005971ms\nI0802 09:31:58.998785       1 service.go:275] Service services-295/tolerate-unready updated: 0 ports\nI0802 09:31:58.999544       1 service.go:415] Removing service port \"services-295/tolerate-unready:http\"\nI0802 09:31:58.999654       1 proxier.go:871] Syncing iptables rules\nI0802 09:31:59.028530       1 proxier.go:826] syncProxyRules took 29.682934ms\nI0802 09:32:00.029297       1 proxier.go:871] Syncing iptables rules\nI0802 09:32:00.057949       1 proxier.go:826] syncProxyRules took 29.274182ms\nI0802 09:32:00.584604       1 service.go:275] Service services-9578/hairpin-test updated: 1 ports\nI0802 09:32:01.058812       1 service.go:390] Adding new service port \"services-9578/hairpin-test\" at 100.71.179.245:8080/TCP\nI0802 09:32:01.058949       1 proxier.go:871] Syncing iptables rules\nI0802 09:32:01.113289       1 proxier.go:826] syncProxyRules took 55.166227ms\nI0802 09:32:01.994562       1 proxier.go:871] Syncing iptables rules\nI0802 09:32:02.049393       1 proxier.go:826] syncProxyRules took 55.621371ms\nI0802 09:32:12.441123       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-attacher updated: 0 ports\nI0802 09:32:12.441747       1 service.go:415] Removing service port \"ephemeral-2872-6578/csi-hostpath-attacher:dummy\"\nI0802 09:32:12.441844       1 proxier.go:871] Syncing iptables rules\nI0802 09:32:12.469992       1 proxier.go:826] syncProxyRules took 28.805439ms\nI0802 09:32:12.630759       1 service.go:275] Service 
volume-expand-8616-827/csi-hostpath-attacher updated: 1 ports\nI0802 09:32:12.631282       1 service.go:390] Adding new service port \"volume-expand-8616-827/csi-hostpath-attacher:dummy\" at 100.68.103.208:12345/TCP\nI0802 09:32:12.631370       1 proxier.go:871] Syncing iptables rules\nI0802 09:32:12.675464       1 proxier.go:826] syncProxyRules took 44.668793ms\nI0802 09:32:13.042162       1 service.go:275] Service ephemeral-2872-6578/csi-hostpathplugin updated: 0 ports\nI0802 09:32:13.207808       1 service.go:275] Service volume-expand-8616-827/csi-hostpathplugin updated: 1 ports\nI0802 09:32:13.435472       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-provisioner updated: 0 ports\nI0802 09:32:13.447946       1 service.go:415] Removing service port \"ephemeral-2872-6578/csi-hostpathplugin:dummy\"\nI0802 09:32:13.448016       1 service.go:390] Adding new service port \"volume-expand-8616-827/csi-hostpathplugin:dummy\" at 100.66.147.83:12345/TCP\nI0802 09:32:13.448026       1 service.go:415] Removing service port \"ephemeral-2872-6578/csi-hostpath-provisioner:dummy\"\nI0802 09:32:13.448117       1 proxier.go:871] Syncing iptables rules\nI0802 09:32:13.476336       1 proxier.go:826] syncProxyRules took 28.819199ms\nI0802 09:32:13.595087       1 service.go:275] Service volume-expand-8616-827/csi-hostpath-provisioner updated: 1 ports\nI0802 09:32:13.830281       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-resizer updated: 0 ports\nI0802 09:32:13.981443       1 service.go:275] Service volume-expand-8616-827/csi-hostpath-resizer updated: 1 ports\nI0802 09:32:14.243112       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:32:14.374369       1 service.go:275] Service volume-expand-8616-827/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:32:14.476950       1 service.go:415] Removing service port \"ephemeral-2872-6578/csi-hostpath-snapshotter:dummy\"\nI0802 09:32:14.477026       1 
service.go:390] Adding new service port \"volume-expand-8616-827/csi-hostpath-snapshotter:dummy\" at 100.65.206.78:12345/TCP\nI0802 09:32:14.477041       1 service.go:390] Adding new service port \"volume-expand-8616-827/csi-hostpath-provisioner:dummy\" at 100.69.16.155:12345/TCP\nI0802 09:32:14.477050       1 service.go:415] Removing service port \"ephemeral-2872-6578/csi-hostpath-resizer:dummy\"\nI0802 09:32:14.477073       1 service.go:390] Adding new service port \"volume-expand-8616-827/csi-hostpath-resizer:dummy\" at 100.65.68.228:12345/TCP\nI0802 09:32:14.477177       1 proxier.go:871] Syncing iptables rules\nI0802 09:32:14.517424       1 proxier.go:826] syncProxyRules took 40.988841ms\nI0802 09:32:14.548304       1 service.go:275] Service services-9578/hairpin-test updated: 0 ports\nI0802 09:32:15.518084       1 service.go:415] Removing service port \"services-9578/hairpin-test\"\nI0802 09:32:15.518211       1 proxier.go:871] Syncing iptables rules\nI0802 09:32:15.545920       1 proxier.go:826] syncProxyRules took 28.343317ms\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-35-97.ap-southeast-2.compute.internal ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-43-68.ap-southeast-2.compute.internal ====\nI0802 09:25:17.762935       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0802 09:25:17.766172       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0802 09:25:17.766189       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0802 09:25:17.766203       1 flags.go:59] FLAG: --bind-address-hard-fail=\"false\"\nI0802 09:25:17.766210       1 flags.go:59] FLAG: --cleanup=\"false\"\nI0802 09:25:17.766214       1 flags.go:59] FLAG: --cleanup-ipvs=\"true\"\nI0802 09:25:17.766218       1 flags.go:59] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI0802 09:25:17.766224       1 flags.go:59] FLAG: --config=\"\"\nI0802 09:25:17.766228       1 flags.go:59] FLAG: --config-sync-period=\"15m0s\"\nI0802 
09:25:17.766234       1 flags.go:59] FLAG: --conntrack-max-per-core=\"131072\"\nI0802 09:25:17.766241       1 flags.go:59] FLAG: --conntrack-min=\"131072\"\nI0802 09:25:17.766245       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI0802 09:25:17.766250       1 flags.go:59] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI0802 09:25:17.766254       1 flags.go:59] FLAG: --detect-local-mode=\"\"\nI0802 09:25:17.766260       1 flags.go:59] FLAG: --feature-gates=\"\"\nI0802 09:25:17.766266       1 flags.go:59] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI0802 09:25:17.766272       1 flags.go:59] FLAG: --healthz-port=\"10256\"\nI0802 09:25:17.766276       1 flags.go:59] FLAG: --help=\"false\"\nI0802 09:25:17.766280       1 flags.go:59] FLAG: --hostname-override=\"ip-172-20-43-68.ap-southeast-2.compute.internal\"\nI0802 09:25:17.766286       1 flags.go:59] FLAG: --iptables-masquerade-bit=\"14\"\nI0802 09:25:17.766289       1 flags.go:59] FLAG: --iptables-min-sync-period=\"1s\"\nI0802 09:25:17.766294       1 flags.go:59] FLAG: --iptables-sync-period=\"30s\"\nI0802 09:25:17.766298       1 flags.go:59] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI0802 09:25:17.766307       1 flags.go:59] FLAG: --ipvs-min-sync-period=\"0s\"\nI0802 09:25:17.766311       1 flags.go:59] FLAG: --ipvs-scheduler=\"\"\nI0802 09:25:17.766316       1 flags.go:59] FLAG: --ipvs-strict-arp=\"false\"\nI0802 09:25:17.766319       1 flags.go:59] FLAG: --ipvs-sync-period=\"30s\"\nI0802 09:25:17.766323       1 flags.go:59] FLAG: --ipvs-tcp-timeout=\"0s\"\nI0802 09:25:17.766327       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI0802 09:25:17.766331       1 flags.go:59] FLAG: --ipvs-udp-timeout=\"0s\"\nI0802 09:25:17.766335       1 flags.go:59] FLAG: --kube-api-burst=\"10\"\nI0802 09:25:17.766339       1 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0802 09:25:17.766346       1 flags.go:59] FLAG: --kube-api-qps=\"5\"\nI0802 09:25:17.766353    
   1 flags.go:59] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI0802 09:25:17.766357       1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI0802 09:25:17.766366       1 flags.go:59] FLAG: --log-dir=\"\"\nI0802 09:25:17.766371       1 flags.go:59] FLAG: --log-file=\"/var/log/kube-proxy.log\"\nI0802 09:25:17.766375       1 flags.go:59] FLAG: --log-file-max-size=\"1800\"\nI0802 09:25:17.766380       1 flags.go:59] FLAG: --log-flush-frequency=\"5s\"\nI0802 09:25:17.766384       1 flags.go:59] FLAG: --logtostderr=\"false\"\nI0802 09:25:17.766388       1 flags.go:59] FLAG: --masquerade-all=\"false\"\nI0802 09:25:17.766392       1 flags.go:59] FLAG: --master=\"https://127.0.0.1\"\nI0802 09:25:17.766396       1 flags.go:59] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI0802 09:25:17.766399       1 flags.go:59] FLAG: --metrics-port=\"10249\"\nI0802 09:25:17.766403       1 flags.go:59] FLAG: --nodeport-addresses=\"[]\"\nI0802 09:25:17.766409       1 flags.go:59] FLAG: --one-output=\"false\"\nI0802 09:25:17.766413       1 flags.go:59] FLAG: --oom-score-adj=\"-998\"\nI0802 09:25:17.766417       1 flags.go:59] FLAG: --profiling=\"false\"\nI0802 09:25:17.766420       1 flags.go:59] FLAG: --proxy-mode=\"\"\nI0802 09:25:17.766426       1 flags.go:59] FLAG: --proxy-port-range=\"\"\nI0802 09:25:17.766431       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI0802 09:25:17.766435       1 flags.go:59] FLAG: --skip-headers=\"false\"\nI0802 09:25:17.766438       1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI0802 09:25:17.766442       1 flags.go:59] FLAG: --stderrthreshold=\"2\"\nI0802 09:25:17.766447       1 flags.go:59] FLAG: --udp-timeout=\"250ms\"\nI0802 09:25:17.766451       1 flags.go:59] FLAG: --v=\"2\"\nI0802 09:25:17.766455       1 flags.go:59] FLAG: --version=\"false\"\nI0802 09:25:17.766462       1 flags.go:59] FLAG: --vmodule=\"\"\nI0802 09:25:17.766466       1 flags.go:59] FLAG: --write-config-to=\"\"\nW0802 09:25:17.768014       1 
server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.\nI0802 09:25:17.769814       1 feature_gate.go:243] feature gates: &{map[]}\nI0802 09:25:17.769897       1 feature_gate.go:243] feature gates: &{map[]}\nE0802 09:25:17.892139       1 node.go:161] Failed to retrieve node info: Get \"https://127.0.0.1/api/v1/nodes/ip-172-20-43-68.ap-southeast-2.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused\nE0802 09:25:24.827386       1 node.go:161] Failed to retrieve node info: nodes \"ip-172-20-43-68.ap-southeast-2.compute.internal\" is forbidden: User \"system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope\nI0802 09:25:26.959168       1 node.go:172] Successfully retrieved node IP: 172.20.43.68\nI0802 09:25:26.959202       1 server_others.go:142] kube-proxy node IP is an IPv4 address (172.20.43.68), assume IPv4 operation\nW0802 09:25:27.024353       1 server_others.go:584] Unknown proxy mode \"\", assuming iptables proxy\nI0802 09:25:27.024468       1 server_others.go:182] DetectLocalMode: 'ClusterCIDR'\nI0802 09:25:27.024482       1 server_others.go:185] Using iptables Proxier.\nI0802 09:25:27.024562       1 utils.go:321] Changed sysctl \"net/ipv4/conf/all/route_localnet\": 0 -> 1\nI0802 09:25:27.024630       1 proxier.go:287] iptables(IPv4) masquerade mark: 0x00004000\nI0802 09:25:27.024688       1 proxier.go:334] iptables(IPv4) sync params: minSyncPeriod=1s, syncPeriod=30s, burstSyncs=2\nI0802 09:25:27.024735       1 proxier.go:346] iptables(IPv4) supports --random-fully\nI0802 09:25:27.025705       1 server.go:650] Version: v1.20.9\nI0802 09:25:27.026847       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144\nI0802 09:25:27.026959       1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI0802 09:25:27.027134       1 mount_linux.go:188] Detected OS without systemd\nI0802 09:25:27.028013       1 
conntrack.go:83] Setting conntrack hashsize to 65536\nI0802 09:25:27.031061       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI0802 09:25:27.031224       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI0802 09:25:27.033347       1 reflector.go:219] Starting reflector *v1.Service (15m0s) from k8s.io/client-go/informers/factory.go:134\nI0802 09:25:27.033457       1 reflector.go:219] Starting reflector *v1beta1.EndpointSlice (15m0s) from k8s.io/client-go/informers/factory.go:134\nI0802 09:25:27.037219       1 config.go:315] Starting service config controller\nI0802 09:25:27.037293       1 shared_informer.go:240] Waiting for caches to sync for service config\nI0802 09:25:27.037365       1 config.go:224] Starting endpoint slice config controller\nI0802 09:25:27.037397       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\nI0802 09:25:27.044690       1 service.go:275] Service deployment-4314/test-rolling-update-with-lb updated: 1 ports\nI0802 09:25:27.044795       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-resizer updated: 1 ports\nI0802 09:25:27.045025       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:25:27.045103       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-provisioner updated: 1 ports\nI0802 09:25:27.045206       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-resizer updated: 1 ports\nI0802 09:25:27.045267       1 service.go:275] Service default/kubernetes updated: 1 ports\nI0802 09:25:27.045344       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-provisioner updated: 1 ports\nI0802 09:25:27.045416       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-attacher updated: 1 ports\nI0802 09:25:27.045822       1 service.go:275] Service kube-system/kube-dns updated: 3 ports\nI0802 09:25:27.045929       1 service.go:275] 
Service volume-expand-9751-6514/csi-hostpath-attacher updated: 1 ports\nI0802 09:25:27.046019       1 service.go:275] Service ephemeral-9710-9555/csi-hostpathplugin updated: 1 ports\nI0802 09:25:27.046096       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:25:27.046166       1 service.go:275] Service volume-expand-9751-6514/csi-hostpathplugin updated: 1 ports\nI0802 09:25:27.137424       1 shared_informer.go:247] Caches are synced for service config \nI0802 09:25:27.137552       1 shared_informer.go:247] Caches are synced for endpoint slice config \nI0802 09:25:27.138732       1 proxier.go:818] Not syncing iptables until Services and Endpoints have been received from master\nI0802 09:25:27.138939       1 service.go:390] Adding new service port \"ephemeral-9710-9555/csi-hostpath-provisioner:dummy\" at 100.64.220.139:12345/TCP\nI0802 09:25:27.138962       1 service.go:390] Adding new service port \"ephemeral-9710-9555/csi-hostpath-attacher:dummy\" at 100.67.215.140:12345/TCP\nI0802 09:25:27.138973       1 service.go:390] Adding new service port \"volume-expand-9751-6514/csi-hostpath-attacher:dummy\" at 100.67.115.219:12345/TCP\nI0802 09:25:27.139009       1 service.go:390] Adding new service port \"ephemeral-9710-9555/csi-hostpathplugin:dummy\" at 100.70.173.208:12345/TCP\nI0802 09:25:27.139018       1 service.go:390] Adding new service port \"volume-expand-9751-6514/csi-hostpath-snapshotter:dummy\" at 100.71.86.20:12345/TCP\nI0802 09:25:27.139030       1 service.go:390] Adding new service port \"volume-expand-9751-6514/csi-hostpathplugin:dummy\" at 100.71.58.236:12345/TCP\nI0802 09:25:27.139040       1 service.go:390] Adding new service port \"deployment-4314/test-rolling-update-with-lb\" at 100.71.104.248:80/TCP\nI0802 09:25:27.139050       1 service.go:390] Adding new service port \"volume-expand-9751-6514/csi-hostpath-resizer:dummy\" at 100.65.122.98:12345/TCP\nI0802 09:25:27.139060       1 service.go:390] 
Adding new service port \"volume-expand-9751-6514/csi-hostpath-provisioner:dummy\" at 100.67.64.219:12345/TCP\nI0802 09:25:27.139075       1 service.go:390] Adding new service port \"default/kubernetes:https\" at 100.64.0.1:443/TCP\nI0802 09:25:27.139085       1 service.go:390] Adding new service port \"kube-system/kube-dns:dns\" at 100.64.0.10:53/UDP\nI0802 09:25:27.139094       1 service.go:390] Adding new service port \"kube-system/kube-dns:dns-tcp\" at 100.64.0.10:53/TCP\nI0802 09:25:27.139114       1 service.go:390] Adding new service port \"kube-system/kube-dns:metrics\" at 100.64.0.10:9153/TCP\nI0802 09:25:27.139133       1 service.go:390] Adding new service port \"ephemeral-9710-9555/csi-hostpath-resizer:dummy\" at 100.65.129.225:12345/TCP\nI0802 09:25:27.139148       1 service.go:390] Adding new service port \"ephemeral-9710-9555/csi-hostpath-snapshotter:dummy\" at 100.69.208.138:12345/TCP\nI0802 09:25:27.139333       1 proxier.go:858] Stale udp service kube-system/kube-dns:dns -> 100.64.0.10\nI0802 09:25:27.139359       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:27.171441       1 proxier.go:1715] Opened local port \"nodePort for deployment-4314/test-rolling-update-with-lb\" (:31127/tcp)\nI0802 09:25:27.190882       1 service_health.go:98] Opening healthcheck \"deployment-4314/test-rolling-update-with-lb\" on port 31777\nI0802 09:25:27.195118       1 proxier.go:826] syncProxyRules took 56.343888ms\nI0802 09:25:53.879126       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:53.921140       1 proxier.go:826] syncProxyRules took 42.262738ms\nI0802 09:25:53.921447       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:53.966733       1 proxier.go:826] syncProxyRules took 45.510806ms\nI0802 09:25:55.182579       1 service.go:275] Service services-1470/up-down-1 updated: 1 ports\nI0802 09:25:55.182886       1 service.go:390] Adding new service port \"services-1470/up-down-1\" at 100.70.4.215:80/TCP\nI0802 09:25:55.183559       1 
proxier.go:871] Syncing iptables rules\nI0802 09:25:55.285622       1 proxier.go:826] syncProxyRules took 103.004979ms\nI0802 09:25:56.287052       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:56.355618       1 proxier.go:826] syncProxyRules took 68.870607ms\nI0802 09:25:56.778024       1 service.go:275] Service kubectl-4027/agnhost-replica updated: 1 ports\nI0802 09:25:57.356073       1 service.go:390] Adding new service port \"kubectl-4027/agnhost-replica\" at 100.71.189.13:6379/TCP\nI0802 09:25:57.356215       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:57.391272       1 proxier.go:826] syncProxyRules took 35.394289ms\nI0802 09:25:58.394281       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:58.436209       1 proxier.go:826] syncProxyRules took 42.168065ms\nI0802 09:25:58.558046       1 service.go:275] Service kubectl-4027/agnhost-primary updated: 1 ports\nI0802 09:25:59.301960       1 service.go:390] Adding new service port \"kubectl-4027/agnhost-primary\" at 100.66.15.222:6379/TCP\nI0802 09:25:59.302172       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:59.384718       1 proxier.go:826] syncProxyRules took 82.967903ms\nI0802 09:25:59.889688       1 proxier.go:871] Syncing iptables rules\nI0802 09:25:59.953898       1 proxier.go:826] syncProxyRules took 64.415394ms\nI0802 09:26:00.353659       1 service.go:275] Service kubectl-4027/frontend updated: 1 ports\nI0802 09:26:00.954146       1 service.go:390] Adding new service port \"kubectl-4027/frontend\" at 100.67.213.81:80/TCP\nI0802 09:26:00.954293       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:00.979874       1 proxier.go:826] syncProxyRules took 25.904006ms\nI0802 09:26:01.285667       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-attacher updated: 1 ports\nI0802 09:26:01.875903       1 service.go:275] Service volume-expand-5125-7028/csi-hostpathplugin updated: 1 ports\nI0802 09:26:01.886382       1 service.go:390] Adding new service port 
\"volume-expand-5125-7028/csi-hostpath-attacher:dummy\" at 100.68.29.179:12345/TCP\nI0802 09:26:01.886410       1 service.go:390] Adding new service port \"volume-expand-5125-7028/csi-hostpathplugin:dummy\" at 100.68.226.200:12345/TCP\nI0802 09:26:01.886461       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:01.941962       1 proxier.go:826] syncProxyRules took 55.761609ms\nI0802 09:26:02.262916       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-provisioner updated: 1 ports\nI0802 09:26:02.662952       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-resizer updated: 1 ports\nI0802 09:26:02.942575       1 service.go:390] Adding new service port \"volume-expand-5125-7028/csi-hostpath-resizer:dummy\" at 100.66.76.190:12345/TCP\nI0802 09:26:02.942608       1 service.go:390] Adding new service port \"volume-expand-5125-7028/csi-hostpath-provisioner:dummy\" at 100.66.131.101:12345/TCP\nI0802 09:26:02.942866       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:02.979123       1 proxier.go:826] syncProxyRules took 36.974341ms\nI0802 09:26:03.050368       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:26:03.983557       1 service.go:390] Adding new service port \"volume-expand-5125-7028/csi-hostpath-snapshotter:dummy\" at 100.69.27.176:12345/TCP\nI0802 09:26:03.983640       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:04.049109       1 proxier.go:826] syncProxyRules took 69.7109ms\nI0802 09:26:04.963050       1 service.go:275] Service services-1470/up-down-2 updated: 1 ports\nI0802 09:26:04.963426       1 service.go:390] Adding new service port \"services-1470/up-down-2\" at 100.70.71.58:80/TCP\nI0802 09:26:04.963624       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:04.996845       1 proxier.go:826] syncProxyRules took 33.760758ms\nI0802 09:26:05.903725       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:05.987637       1 service.go:275] Service 
webhook-9862/e2e-test-webhook updated: 1 ports\nI0802 09:26:06.034107       1 proxier.go:826] syncProxyRules took 130.607719ms\nI0802 09:26:06.895062       1 service.go:390] Adding new service port \"webhook-9862/e2e-test-webhook\" at 100.66.227.222:8443/TCP\nI0802 09:26:06.895141       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:06.949637       1 proxier.go:826] syncProxyRules took 54.789084ms\nI0802 09:26:07.950312       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:08.058563       1 proxier.go:826] syncProxyRules took 108.80534ms\nI0802 09:26:09.033386       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:09.069726       1 proxier.go:826] syncProxyRules took 36.584953ms\nI0802 09:26:10.070064       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:10.119073       1 proxier.go:826] syncProxyRules took 49.237197ms\nI0802 09:26:10.886172       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:10.912154       1 proxier.go:826] syncProxyRules took 26.253707ms\nI0802 09:26:11.320767       1 service.go:275] Service webhook-9862/e2e-test-webhook updated: 0 ports\nI0802 09:26:11.912406       1 service.go:415] Removing service port \"webhook-9862/e2e-test-webhook\"\nI0802 09:26:11.912553       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:11.939635       1 proxier.go:826] syncProxyRules took 27.368321ms\nI0802 09:26:12.939947       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:12.965719       1 proxier.go:826] syncProxyRules took 25.97909ms\nI0802 09:26:13.966239       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:14.011033       1 proxier.go:826] syncProxyRules took 45.065834ms\nI0802 09:26:14.986534       1 service.go:275] Service kubectl-4027/agnhost-replica updated: 0 ports\nI0802 09:26:14.987879       1 service.go:415] Removing service port \"kubectl-4027/agnhost-replica\"\nI0802 09:26:14.988055       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:15.031733       1 proxier.go:826] syncProxyRules took 
44.018606ms\nI0802 09:26:15.741170       1 service.go:275] Service services-5870/service-headless-toggled updated: 1 ports\nI0802 09:26:15.870818       1 service.go:275] Service kubectl-4027/agnhost-primary updated: 0 ports\nI0802 09:26:15.901593       1 service.go:390] Adding new service port \"services-5870/service-headless-toggled\" at 100.66.82.148:80/TCP\nI0802 09:26:15.901621       1 service.go:415] Removing service port \"kubectl-4027/agnhost-primary\"\nI0802 09:26:15.901701       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:16.020776       1 proxier.go:826] syncProxyRules took 119.347922ms\nI0802 09:26:16.794605       1 service.go:275] Service kubectl-4027/frontend updated: 0 ports\nI0802 09:26:17.021295       1 service.go:415] Removing service port \"kubectl-4027/frontend\"\nI0802 09:26:17.021386       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:17.087150       1 proxier.go:826] syncProxyRules took 66.047178ms\nI0802 09:26:18.176621       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:18.271950       1 proxier.go:826] syncProxyRules took 95.549284ms\nI0802 09:26:18.895333       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:18.935675       1 proxier.go:826] syncProxyRules took 40.590409ms\nI0802 09:26:31.268412       1 service.go:275] Service deployment-4314/test-rolling-update-with-lb updated: 0 ports\nI0802 09:26:31.270105       1 service.go:415] Removing service port \"deployment-4314/test-rolling-update-with-lb\"\nI0802 09:26:31.270221       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:31.307051       1 service_health.go:83] Closing healthcheck \"deployment-4314/test-rolling-update-with-lb\" on port 31777\nI0802 09:26:31.308242       1 proxier.go:826] syncProxyRules took 38.940404ms\nI0802 09:26:36.844475       1 service.go:275] Service services-5870/service-headless-toggled updated: 0 ports\nI0802 09:26:36.844751       1 service.go:415] Removing service port \"services-5870/service-headless-toggled\"\nI0802 
09:26:36.844823       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:36.872006       1 proxier.go:826] syncProxyRules took 27.495048ms\nI0802 09:26:39.420376       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:39.454357       1 proxier.go:826] syncProxyRules took 34.183561ms\nI0802 09:26:40.422468       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:40.447319       1 proxier.go:826] syncProxyRules took 25.095986ms\nI0802 09:26:42.879684       1 service.go:275] Service webhook-3975/e2e-test-webhook updated: 1 ports\nI0802 09:26:42.879993       1 service.go:390] Adding new service port \"webhook-3975/e2e-test-webhook\" at 100.69.160.98:8443/TCP\nI0802 09:26:42.880071       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:42.940846       1 proxier.go:826] syncProxyRules took 61.127048ms\nI0802 09:26:42.942084       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:43.015217       1 proxier.go:826] syncProxyRules took 74.335365ms\nI0802 09:26:44.028088       1 service.go:275] Service services-5870/service-headless-toggled updated: 1 ports\nI0802 09:26:44.028459       1 service.go:390] Adding new service port \"services-5870/service-headless-toggled\" at 100.66.82.148:80/TCP\nI0802 09:26:44.028551       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:44.053487       1 proxier.go:826] syncProxyRules took 25.37088ms\nI0802 09:26:46.299022       1 service.go:275] Service services-1470/up-down-1 updated: 0 ports\nI0802 09:26:46.299371       1 service.go:415] Removing service port \"services-1470/up-down-1\"\nI0802 09:26:46.299571       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:46.362421       1 proxier.go:826] syncProxyRules took 63.223335ms\nI0802 09:26:46.908522       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:46.947221       1 proxier.go:826] syncProxyRules took 39.008269ms\nI0802 09:26:47.430125       1 service.go:275] Service webhook-3975/e2e-test-webhook updated: 0 ports\nI0802 09:26:47.430445       1 
service.go:415] Removing service port \"webhook-3975/e2e-test-webhook\"\nI0802 09:26:47.430541       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:47.481069       1 proxier.go:826] syncProxyRules took 50.910119ms\nI0802 09:26:48.481439       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:48.518805       1 proxier.go:826] syncProxyRules took 37.581086ms\nI0802 09:27:03.388371       1 service.go:275] Service services-1470/up-down-3 updated: 1 ports\nI0802 09:27:03.389024       1 service.go:390] Adding new service port \"services-1470/up-down-3\" at 100.64.1.233:80/TCP\nI0802 09:27:03.389572       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:03.420694       1 proxier.go:826] syncProxyRules took 32.28568ms\nI0802 09:27:03.421112       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:03.451953       1 proxier.go:826] syncProxyRules took 31.083098ms\nI0802 09:27:05.571415       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:05.601019       1 proxier.go:826] syncProxyRules took 30.014945ms\nI0802 09:27:05.751251       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:05.793144       1 proxier.go:826] syncProxyRules took 42.104104ms\nI0802 09:27:06.257833       1 service.go:275] Service services-5870/service-headless-toggled updated: 0 ports\nI0802 09:27:06.793506       1 service.go:415] Removing service port \"services-5870/service-headless-toggled\"\nI0802 09:27:06.793761       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:06.840895       1 proxier.go:826] syncProxyRules took 47.622471ms\nI0802 09:27:08.952785       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-attacher updated: 1 ports\nI0802 09:27:08.953143       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpath-attacher:dummy\" at 100.64.190.82:12345/TCP\nI0802 09:27:08.953262       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:09.008489       1 proxier.go:826] syncProxyRules took 55.597825ms\nI0802 09:27:09.008844    
   1 proxier.go:871] Syncing iptables rules\nI0802 09:27:09.060374       1 proxier.go:826] syncProxyRules took 51.848628ms\nI0802 09:27:09.531241       1 service.go:275] Service provisioning-2971-8427/csi-hostpathplugin updated: 1 ports\nI0802 09:27:09.918785       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-provisioner updated: 1 ports\nI0802 09:27:10.064055       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpathplugin:dummy\" at 100.65.218.93:12345/TCP\nI0802 09:27:10.064093       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpath-provisioner:dummy\" at 100.66.137.241:12345/TCP\nI0802 09:27:10.064148       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:10.095861       1 proxier.go:826] syncProxyRules took 32.06982ms\nI0802 09:27:10.304583       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-resizer updated: 1 ports\nI0802 09:27:10.695398       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:27:11.096191       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpath-resizer:dummy\" at 100.67.174.173:12345/TCP\nI0802 09:27:11.096216       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpath-snapshotter:dummy\" at 100.68.17.221:12345/TCP\nI0802 09:27:11.096328       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:11.121535       1 proxier.go:826] syncProxyRules took 25.510509ms\nI0802 09:27:17.835867       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:17.864202       1 proxier.go:826] syncProxyRules took 28.541747ms\nI0802 09:27:18.831776       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:18.873371       1 proxier.go:826] syncProxyRules took 41.850193ms\nI0802 09:27:19.835702       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-attacher updated: 0 ports\nI0802 09:27:19.836074       1 service.go:415] Removing service port 
\"volume-expand-5125-7028/csi-hostpath-attacher:dummy\"\nI0802 09:27:19.836222       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:19.872272       1 proxier.go:826] syncProxyRules took 36.463685ms\nI0802 09:27:20.305512       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:20.333305       1 proxier.go:826] syncProxyRules took 28.102467ms\nI0802 09:27:20.420478       1 service.go:275] Service volume-expand-5125-7028/csi-hostpathplugin updated: 0 ports\nI0802 09:27:20.812752       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-provisioner updated: 0 ports\nI0802 09:27:21.204265       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-resizer updated: 0 ports\nI0802 09:27:21.205319       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpathplugin:dummy\"\nI0802 09:27:21.205361       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpath-provisioner:dummy\"\nI0802 09:27:21.205372       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpath-resizer:dummy\"\nI0802 09:27:21.213350       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:21.259043       1 proxier.go:826] syncProxyRules took 54.742081ms\nI0802 09:27:21.597152       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:27:22.259392       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpath-snapshotter:dummy\"\nI0802 09:27:22.259567       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:22.284337       1 proxier.go:826] syncProxyRules took 25.118072ms\nI0802 09:27:23.432274       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:23.458018       1 proxier.go:826] syncProxyRules took 25.997015ms\nI0802 09:27:23.708265       1 service.go:275] Service webhook-8390/e2e-test-webhook updated: 1 ports\nI0802 09:27:24.458310       1 service.go:390] Adding new service port \"webhook-8390/e2e-test-webhook\" at 
100.66.116.29:8443/TCP\nI0802 09:27:24.458405       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:24.483905       1 proxier.go:826] syncProxyRules took 25.765389ms\nI0802 09:27:31.644169       1 service.go:275] Service dns-5822/dns-test-service-3 updated: 1 ports\nI0802 09:27:31.644474       1 service.go:390] Adding new service port \"dns-5822/dns-test-service-3:http\" at 100.65.38.43:80/TCP\nI0802 09:27:31.645022       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:31.683731       1 proxier.go:826] syncProxyRules took 39.448935ms\nI0802 09:27:33.141610       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:33.181283       1 service.go:275] Service services-1470/up-down-2 updated: 0 ports\nI0802 09:27:33.205153       1 service.go:275] Service services-1470/up-down-3 updated: 0 ports\nI0802 09:27:33.221887       1 proxier.go:826] syncProxyRules took 80.515252ms\nI0802 09:27:33.222144       1 service.go:415] Removing service port \"services-1470/up-down-2\"\nI0802 09:27:33.222163       1 service.go:415] Removing service port \"services-1470/up-down-3\"\nI0802 09:27:33.222358       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:33.262214       1 proxier.go:826] syncProxyRules took 40.297349ms\nI0802 09:27:34.262451       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:34.287080       1 proxier.go:826] syncProxyRules took 24.81283ms\nI0802 09:27:37.363426       1 service.go:275] Service dns-5822/dns-test-service-3 updated: 0 ports\nI0802 09:27:37.364280       1 service.go:415] Removing service port \"dns-5822/dns-test-service-3:http\"\nI0802 09:27:37.364430       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:37.391326       1 proxier.go:826] syncProxyRules took 27.218478ms\nI0802 09:27:38.408039       1 service.go:275] Service webhook-8390/e2e-test-webhook updated: 0 ports\nI0802 09:27:38.408344       1 service.go:415] Removing service port \"webhook-8390/e2e-test-webhook\"\nI0802 09:27:38.409283       1 proxier.go:871] Syncing 
iptables rules\nI0802 09:27:38.443452       1 proxier.go:826] syncProxyRules took 35.309086ms\nI0802 09:27:38.443694       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:38.470737       1 proxier.go:826] syncProxyRules took 27.256975ms\nI0802 09:27:39.871725       1 service.go:275] Service webhook-4152/e2e-test-webhook updated: 1 ports\nI0802 09:27:39.872534       1 service.go:390] Adding new service port \"webhook-4152/e2e-test-webhook\" at 100.66.210.237:8443/TCP\nI0802 09:27:39.872962       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:39.914403       1 proxier.go:826] syncProxyRules took 42.043739ms\nI0802 09:27:40.914722       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:40.939110       1 proxier.go:826] syncProxyRules took 24.598877ms\nI0802 09:27:42.978818       1 service.go:275] Service webhook-4152/e2e-test-webhook updated: 0 ports\nI0802 09:27:42.979576       1 service.go:415] Removing service port \"webhook-4152/e2e-test-webhook\"\nI0802 09:27:42.979763       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:43.056040       1 proxier.go:826] syncProxyRules took 76.662165ms\nI0802 09:27:43.450502       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:43.492653       1 proxier.go:826] syncProxyRules took 42.345376ms\nI0802 09:28:09.154726       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-attacher updated: 1 ports\nI0802 09:28:09.155143       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpath-attacher:dummy\" at 100.65.180.222:12345/TCP\nI0802 09:28:09.155437       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:09.194944       1 proxier.go:826] syncProxyRules took 39.986661ms\nI0802 09:28:09.195282       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:09.232686       1 proxier.go:826] syncProxyRules took 37.566891ms\nI0802 09:28:09.731629       1 service.go:275] Service provisioning-4508-9194/csi-hostpathplugin updated: 1 ports\nI0802 09:28:10.117503       1 
service.go:275] Service provisioning-4508-9194/csi-hostpath-provisioner updated: 1 ports\nI0802 09:28:10.234101       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpath-provisioner:dummy\" at 100.68.168.136:12345/TCP\nI0802 09:28:10.234194       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpathplugin:dummy\" at 100.65.224.153:12345/TCP\nI0802 09:28:10.234293       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:10.261430       1 proxier.go:826] syncProxyRules took 27.575963ms\nI0802 09:28:10.503326       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-resizer updated: 1 ports\nI0802 09:28:10.890237       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:28:11.171114       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpath-resizer:dummy\" at 100.64.131.120:12345/TCP\nI0802 09:28:11.171597       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpath-snapshotter:dummy\" at 100.71.16.190:12345/TCP\nI0802 09:28:11.171821       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:11.209148       1 proxier.go:826] syncProxyRules took 38.234906ms\nI0802 09:28:12.363270       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:12.404311       1 proxier.go:826] syncProxyRules took 41.239569ms\nI0802 09:28:12.685167       1 service.go:275] Service volume-provisioning-4199/glusterfs-dynamic-4a7e9e77-8307-4e54-b024-19bba81d7cc4 updated: 1 ports\nI0802 09:28:13.404596       1 service.go:390] Adding new service port \"volume-provisioning-4199/glusterfs-dynamic-4a7e9e77-8307-4e54-b024-19bba81d7cc4\" at 100.66.79.76:1/TCP\nI0802 09:28:13.404795       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:13.433244       1 proxier.go:826] syncProxyRules took 28.813209ms\nI0802 09:28:13.792225       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-attacher updated: 0 ports\nI0802 
09:28:14.254658       1 service.go:415] Removing service port \"provisioning-2971-8427/csi-hostpath-attacher:dummy\"\nI0802 09:28:14.254885       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:14.280375       1 proxier.go:826] syncProxyRules took 25.846827ms\nI0802 09:28:14.389953       1 service.go:275] Service provisioning-2971-8427/csi-hostpathplugin updated: 0 ports\nI0802 09:28:14.782481       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-provisioner updated: 0 ports\nI0802 09:28:15.174440       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-resizer updated: 0 ports\nI0802 09:28:15.174800       1 service.go:415] Removing service port \"provisioning-2971-8427/csi-hostpathplugin:dummy\"\nI0802 09:28:15.174905       1 service.go:415] Removing service port \"provisioning-2971-8427/csi-hostpath-provisioner:dummy\"\nI0802 09:28:15.174956       1 service.go:415] Removing service port \"provisioning-2971-8427/csi-hostpath-resizer:dummy\"\nI0802 09:28:15.175067       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:15.205847       1 proxier.go:826] syncProxyRules took 31.227776ms\nI0802 09:28:15.572138       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:28:16.049154       1 service.go:275] Service volume-provisioning-4199/glusterfs-dynamic-4a7e9e77-8307-4e54-b024-19bba81d7cc4 updated: 0 ports\nI0802 09:28:16.206084       1 service.go:415] Removing service port \"volume-provisioning-4199/glusterfs-dynamic-4a7e9e77-8307-4e54-b024-19bba81d7cc4\"\nI0802 09:28:16.206112       1 service.go:415] Removing service port \"provisioning-2971-8427/csi-hostpath-snapshotter:dummy\"\nI0802 09:28:16.206307       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:16.232098       1 proxier.go:826] syncProxyRules took 26.187252ms\nI0802 09:28:19.377819       1 service.go:275] Service services-878/service-proxy-toggled updated: 1 ports\nI0802 09:28:19.378314       1 service.go:390] Adding 
new service port \"services-878/service-proxy-toggled\" at 100.69.249.190:80/TCP\nI0802 09:28:19.378372       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:19.406012       1 proxier.go:826] syncProxyRules took 27.844715ms\nI0802 09:28:19.406206       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:19.430474       1 proxier.go:826] syncProxyRules took 24.432894ms\nI0802 09:28:19.810518       1 service.go:275] Service webhook-3845/e2e-test-webhook updated: 1 ports\nI0802 09:28:20.430762       1 service.go:390] Adding new service port \"webhook-3845/e2e-test-webhook\" at 100.65.164.170:8443/TCP\nI0802 09:28:20.430856       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:20.455696       1 proxier.go:826] syncProxyRules took 25.111508ms\nI0802 09:28:21.456046       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:21.483925       1 proxier.go:826] syncProxyRules took 28.116295ms\nI0802 09:28:25.976069       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:26.007475       1 proxier.go:826] syncProxyRules took 31.701824ms\nI0802 09:28:37.456489       1 service.go:275] Service webhook-3845/e2e-test-webhook updated: 0 ports\nI0802 09:28:37.457140       1 service.go:415] Removing service port \"webhook-3845/e2e-test-webhook\"\nI0802 09:28:37.457205       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:37.496912       1 proxier.go:826] syncProxyRules took 40.38584ms\nI0802 09:28:37.497677       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:37.538609       1 proxier.go:826] syncProxyRules took 41.13469ms\nI0802 09:28:40.100621       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-attacher updated: 0 ports\nI0802 09:28:40.101169       1 service.go:415] Removing service port \"provisioning-4508-9194/csi-hostpath-attacher:dummy\"\nI0802 09:28:40.101744       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:40.169513       1 proxier.go:826] syncProxyRules took 68.515722ms\nI0802 09:28:40.169839       1 proxier.go:871] 
Syncing iptables rules\nI0802 09:28:40.215890       1 proxier.go:826] syncProxyRules took 46.233098ms\nI0802 09:28:40.285525       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-attacher updated: 1 ports\nI0802 09:28:40.688317       1 service.go:275] Service provisioning-4508-9194/csi-hostpathplugin updated: 0 ports\nI0802 09:28:40.860029       1 service.go:275] Service volume-expand-5280-1128/csi-hostpathplugin updated: 1 ports\nI0802 09:28:41.092223       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-provisioner updated: 0 ports\nI0802 09:28:41.106808       1 service.go:390] Adding new service port \"volume-expand-5280-1128/csi-hostpath-attacher:dummy\" at 100.68.184.116:12345/TCP\nI0802 09:28:41.106998       1 service.go:415] Removing service port \"provisioning-4508-9194/csi-hostpathplugin:dummy\"\nI0802 09:28:41.107085       1 service.go:390] Adding new service port \"volume-expand-5280-1128/csi-hostpathplugin:dummy\" at 100.69.248.233:12345/TCP\nI0802 09:28:41.107156       1 service.go:415] Removing service port \"provisioning-4508-9194/csi-hostpath-provisioner:dummy\"\nI0802 09:28:41.107305       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:41.148937       1 proxier.go:826] syncProxyRules took 42.307353ms\nI0802 09:28:41.261124       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-provisioner updated: 1 ports\nI0802 09:28:41.487229       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-resizer updated: 0 ports\nI0802 09:28:41.746177       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-resizer updated: 1 ports\nI0802 09:28:41.953121       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:28:42.137338       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:28:42.137693       1 service.go:390] Adding new service port \"volume-expand-5280-1128/csi-hostpath-resizer:dummy\" at 
100.66.253.214:12345/TCP\nI0802 09:28:42.137821       1 service.go:415] Removing service port \"provisioning-4508-9194/csi-hostpath-snapshotter:dummy\"\nI0802 09:28:42.137890       1 service.go:390] Adding new service port \"volume-expand-5280-1128/csi-hostpath-snapshotter:dummy\" at 100.65.54.148:12345/TCP\nI0802 09:28:42.137947       1 service.go:390] Adding new service port \"volume-expand-5280-1128/csi-hostpath-provisioner:dummy\" at 100.70.52.135:12345/TCP\nI0802 09:28:42.138023       1 service.go:415] Removing service port \"provisioning-4508-9194/csi-hostpath-resizer:dummy\"\nI0802 09:28:42.138200       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:42.206590       1 proxier.go:826] syncProxyRules took 69.07753ms\nI0802 09:28:43.206903       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:43.233239       1 proxier.go:826] syncProxyRules took 26.524212ms\nI0802 09:28:45.717304       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:45.757455       1 proxier.go:826] syncProxyRules took 40.353329ms\nI0802 09:28:47.118563       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:47.156053       1 proxier.go:826] syncProxyRules took 37.656832ms\nI0802 09:28:49.114493       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:49.143892       1 proxier.go:826] syncProxyRules took 29.606895ms\nI0802 09:28:49.513018       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:49.545189       1 proxier.go:826] syncProxyRules took 32.429309ms\nI0802 09:28:50.115159       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:50.141268       1 proxier.go:826] syncProxyRules took 26.401334ms\nI0802 09:28:50.401943       1 service.go:275] Service services-878/service-proxy-toggled updated: 0 ports\nI0802 09:28:51.141516       1 service.go:415] Removing service port \"services-878/service-proxy-toggled\"\nI0802 09:28:51.141759       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:51.168227       1 proxier.go:826] syncProxyRules took 
26.873041ms\nI0802 09:28:57.610540       1 service.go:275] Service services-878/service-proxy-toggled updated: 1 ports\nI0802 09:28:57.610890       1 service.go:390] Adding new service port \"services-878/service-proxy-toggled\" at 100.69.249.190:80/TCP\nI0802 09:28:57.611025       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:57.646054       1 proxier.go:826] syncProxyRules took 35.378419ms\nI0802 09:28:57.646364       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:57.670887       1 proxier.go:826] syncProxyRules took 24.796688ms\nI0802 09:29:10.459878       1 service.go:275] Service webhook-6678/e2e-test-webhook updated: 1 ports\nI0802 09:29:10.460187       1 service.go:390] Adding new service port \"webhook-6678/e2e-test-webhook\" at 100.67.74.125:8443/TCP\nI0802 09:29:10.460348       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:10.488941       1 proxier.go:826] syncProxyRules took 28.930375ms\nI0802 09:29:10.489237       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:10.516017       1 proxier.go:826] syncProxyRules took 27.048164ms\nI0802 09:29:13.208939       1 service.go:275] Service webhook-6678/e2e-test-webhook updated: 0 ports\nI0802 09:29:13.209296       1 service.go:415] Removing service port \"webhook-6678/e2e-test-webhook\"\nI0802 09:29:13.211691       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:13.250149       1 proxier.go:826] syncProxyRules took 41.032034ms\nI0802 09:29:13.767738       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:13.804256       1 proxier.go:826] syncProxyRules took 36.755319ms\nI0802 09:29:18.939627       1 service.go:275] Service services-6068/multi-endpoint-test updated: 2 ports\nI0802 09:29:18.940174       1 service.go:390] Adding new service port \"services-6068/multi-endpoint-test:portname1\" at 100.66.255.138:80/TCP\nI0802 09:29:18.940312       1 service.go:390] Adding new service port \"services-6068/multi-endpoint-test:portname2\" at 100.66.255.138:81/TCP\nI0802 
09:29:18.940432       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:18.969228       1 proxier.go:826] syncProxyRules took 29.566714ms\nI0802 09:29:18.969483       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:18.995176       1 proxier.go:826] syncProxyRules took 25.921357ms\nI0802 09:29:19.941648       1 service.go:275] Service services-878/service-proxy-toggled updated: 0 ports\nI0802 09:29:19.942112       1 service.go:415] Removing service port \"services-878/service-proxy-toggled\"\nI0802 09:29:19.942203       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:19.981519       1 proxier.go:826] syncProxyRules took 39.834397ms\nI0802 09:29:20.982046       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:21.008964       1 proxier.go:826] syncProxyRules took 27.195705ms\nI0802 09:29:22.009396       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:22.034514       1 proxier.go:826] syncProxyRules took 25.339047ms\nI0802 09:29:23.741188       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:23.784203       1 proxier.go:826] syncProxyRules took 43.198745ms\nI0802 09:29:24.514045       1 service.go:275] Service dns-4443/test-service-2 updated: 1 ports\nI0802 09:29:24.514423       1 service.go:390] Adding new service port \"dns-4443/test-service-2:http\" at 100.71.195.231:80/TCP\nI0802 09:29:24.514536       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:24.545835       1 proxier.go:826] syncProxyRules took 31.691997ms\nI0802 09:29:25.192499       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:25.226031       1 proxier.go:826] syncProxyRules took 33.794709ms\nI0802 09:29:26.259238       1 proxier.go:871] Syncing iptables rules\nI0802 09:29:26.303730       1 proxier.go:826] syncProxyRules took 44.715313ms\nI0802 09:29:27.014433       1 service.go:275] Service services-6068/multi-endpoint-test updated: 0 ports\nI0802 09:29:27.014793       1 service.go:415] Removing service port 
"services-6068/multi-endpoint-test:portname1"
I0802 09:29:27.014818       1 service.go:415] Removing service port "services-6068/multi-endpoint-test:portname2"
I0802 09:29:27.014942       1 proxier.go:871] Syncing iptables rules
I0802 09:29:27.049495       1 proxier.go:826] syncProxyRules took 35.027254ms
I0802 09:29:28.049844       1 proxier.go:871] Syncing iptables rules
I0802 09:29:28.089623       1 proxier.go:826] syncProxyRules took 40.005119ms
I0802 09:29:43.214122       1 service.go:275] Service provisioning-16-2189/csi-hostpath-attacher updated: 1 ports
I0802 09:29:43.214441       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpath-attacher:dummy" at 100.64.227.251:12345/TCP
I0802 09:29:43.214597       1 proxier.go:871] Syncing iptables rules
I0802 09:29:43.245921       1 proxier.go:826] syncProxyRules took 31.6544ms
I0802 09:29:43.246158       1 proxier.go:871] Syncing iptables rules
I0802 09:29:43.283959       1 proxier.go:826] syncProxyRules took 38.008475ms
I0802 09:29:43.793211       1 service.go:275] Service provisioning-16-2189/csi-hostpathplugin updated: 1 ports
I0802 09:29:44.180573       1 service.go:275] Service provisioning-16-2189/csi-hostpath-provisioner updated: 1 ports
I0802 09:29:44.284335       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpathplugin:dummy" at 100.68.142.127:12345/TCP
I0802 09:29:44.284368       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpath-provisioner:dummy" at 100.67.83.87:12345/TCP
I0802 09:29:44.284437       1 proxier.go:871] Syncing iptables rules
I0802 09:29:44.312902       1 proxier.go:826] syncProxyRules took 28.75916ms
I0802 09:29:44.567861       1 service.go:275] Service provisioning-16-2189/csi-hostpath-resizer updated: 1 ports
I0802 09:29:44.956412       1 service.go:275] Service provisioning-16-2189/csi-hostpath-snapshotter updated: 1 ports
I0802 09:29:45.313166       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpath-resizer:dummy" at 100.64.33.107:12345/TCP
I0802 09:29:45.313191       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpath-snapshotter:dummy" at 100.67.177.99:12345/TCP
I0802 09:29:45.313363       1 proxier.go:871] Syncing iptables rules
I0802 09:29:45.340206       1 proxier.go:826] syncProxyRules took 27.197124ms
I0802 09:29:46.924123       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-attacher updated: 0 ports
I0802 09:29:46.924370       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpath-attacher:dummy"
I0802 09:29:46.924436       1 proxier.go:871] Syncing iptables rules
I0802 09:29:46.957412       1 proxier.go:826] syncProxyRules took 33.256945ms
I0802 09:29:47.507944       1 service.go:275] Service volume-expand-5280-1128/csi-hostpathplugin updated: 0 ports
I0802 09:29:47.508539       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpathplugin:dummy"
I0802 09:29:47.509231       1 proxier.go:871] Syncing iptables rules
I0802 09:29:47.540630       1 proxier.go:826] syncProxyRules took 32.548558ms
I0802 09:29:47.914160       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-provisioner updated: 0 ports
I0802 09:29:48.311897       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-resizer updated: 0 ports
I0802 09:29:48.312193       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpath-provisioner:dummy"
I0802 09:29:48.312298       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpath-resizer:dummy"
I0802 09:29:48.312433       1 proxier.go:871] Syncing iptables rules
I0802 09:29:48.345466       1 proxier.go:826] syncProxyRules took 33.454013ms
I0802 09:29:48.706637       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-snapshotter updated: 0 ports
I0802 09:29:49.345820       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpath-snapshotter:dummy"
I0802 09:29:49.346005       1 proxier.go:871] Syncing iptables rules
I0802 09:29:49.386841       1 proxier.go:826] syncProxyRules took 41.188561ms
I0802 09:29:50.581245       1 proxier.go:871] Syncing iptables rules
I0802 09:29:50.613876       1 proxier.go:826] syncProxyRules took 32.896671ms
I0802 09:29:51.614502       1 proxier.go:871] Syncing iptables rules
I0802 09:29:51.653106       1 proxier.go:826] syncProxyRules took 38.860448ms
I0802 09:30:05.295447       1 proxier.go:871] Syncing iptables rules
I0802 09:30:05.346161       1 proxier.go:826] syncProxyRules took 50.884354ms
I0802 09:30:05.488906       1 service.go:275] Service dns-4443/test-service-2 updated: 0 ports
I0802 09:30:05.489114       1 service.go:415] Removing service port "dns-4443/test-service-2:http"
I0802 09:30:05.489179       1 proxier.go:871] Syncing iptables rules
I0802 09:30:05.523609       1 proxier.go:826] syncProxyRules took 34.669123ms
I0802 09:30:06.523906       1 proxier.go:871] Syncing iptables rules
I0802 09:30:06.562289       1 proxier.go:826] syncProxyRules took 38.594178ms
I0802 09:30:10.256663       1 service.go:275] Service volumemode-685-3249/csi-hostpath-attacher updated: 1 ports
I0802 09:30:10.257108       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpath-attacher:dummy" at 100.71.70.1:12345/TCP
I0802 09:30:10.257264       1 proxier.go:871] Syncing iptables rules
I0802 09:30:10.286419       1 proxier.go:826] syncProxyRules took 29.654556ms
I0802 09:30:10.286653       1 proxier.go:871] Syncing iptables rules
I0802 09:30:10.309746       1 proxier.go:826] syncProxyRules took 23.302598ms
I0802 09:30:10.839683       1 service.go:275] Service volumemode-685-3249/csi-hostpathplugin updated: 1 ports
I0802 09:30:11.232100       1 service.go:275] Service volumemode-685-3249/csi-hostpath-provisioner updated: 1 ports
I0802 09:30:11.309965       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpathplugin:dummy" at 100.64.160.4:12345/TCP
I0802 09:30:11.310009       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpath-provisioner:dummy" at 100.64.110.51:12345/TCP
I0802 09:30:11.310158       1 proxier.go:871] Syncing iptables rules
I0802 09:30:11.337633       1 proxier.go:826] syncProxyRules took 27.818112ms
I0802 09:30:11.623609       1 service.go:275] Service volumemode-685-3249/csi-hostpath-resizer updated: 1 ports
I0802 09:30:12.013341       1 service.go:275] Service volumemode-685-3249/csi-hostpath-snapshotter updated: 1 ports
I0802 09:30:12.338471       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpath-resizer:dummy" at 100.68.58.197:12345/TCP
I0802 09:30:12.338580       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpath-snapshotter:dummy" at 100.67.184.34:12345/TCP
I0802 09:30:12.339029       1 proxier.go:871] Syncing iptables rules
I0802 09:30:12.371155       1 proxier.go:826] syncProxyRules took 32.89187ms
I0802 09:30:14.198561       1 service.go:275] Service provisioning-16-2189/csi-hostpath-attacher updated: 0 ports
I0802 09:30:14.199151       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpath-attacher:dummy"
I0802 09:30:14.199321       1 proxier.go:871] Syncing iptables rules
I0802 09:30:14.234781       1 proxier.go:826] syncProxyRules took 35.810576ms
I0802 09:30:14.661649       1 proxier.go:871] Syncing iptables rules
I0802 09:30:14.687860       1 proxier.go:826] syncProxyRules took 26.433053ms
I0802 09:30:14.800669       1 service.go:275] Service provisioning-16-2189/csi-hostpathplugin updated: 0 ports
I0802 09:30:15.194020       1 service.go:275] Service provisioning-16-2189/csi-hostpath-provisioner updated: 0 ports
I0802 09:30:15.589202       1 service.go:275] Service provisioning-16-2189/csi-hostpath-resizer updated: 0 ports
I0802 09:30:15.589609       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpath-provisioner:dummy"
I0802 09:30:15.589696       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpath-resizer:dummy"
I0802 09:30:15.589761       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpathplugin:dummy"
I0802 09:30:15.589926       1 proxier.go:871] Syncing iptables rules
I0802 09:30:15.623099       1 proxier.go:826] syncProxyRules took 33.70006ms
I0802 09:30:15.989664       1 service.go:275] Service provisioning-16-2189/csi-hostpath-snapshotter updated: 0 ports
I0802 09:30:16.380956       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpath-snapshotter:dummy"
I0802 09:30:16.381106       1 proxier.go:871] Syncing iptables rules
I0802 09:30:16.407708       1 proxier.go:826] syncProxyRules took 26.891318ms
I0802 09:30:17.379586       1 proxier.go:871] Syncing iptables rules
I0802 09:30:17.407227       1 proxier.go:826] syncProxyRules took 27.862228ms
I0802 09:30:18.408811       1 proxier.go:871] Syncing iptables rules
I0802 09:30:18.438600       1 proxier.go:826] syncProxyRules took 31.142167ms
I0802 09:30:24.695973       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-attacher updated: 1 ports
I0802 09:30:24.696356       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpath-attacher:dummy" at 100.69.156.225:12345/TCP
I0802 09:30:24.696499       1 proxier.go:871] Syncing iptables rules
I0802 09:30:24.731856       1 proxier.go:826] syncProxyRules took 35.70065ms
I0802 09:30:24.732287       1 proxier.go:871] Syncing iptables rules
I0802 09:30:24.782405       1 proxier.go:826] syncProxyRules took 50.515793ms
I0802 09:30:25.273010       1 service.go:275] Service volumemode-5208-5908/csi-hostpathplugin updated: 1 ports
I0802 09:30:25.659866       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-provisioner updated: 1 ports
I0802 09:30:25.782712       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpathplugin:dummy" at 100.71.116.224:12345/TCP
I0802 09:30:25.782739       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpath-provisioner:dummy" at 100.71.179.180:12345/TCP
I0802 09:30:25.782897       1 proxier.go:871] Syncing iptables rules
I0802 09:30:25.810077       1 proxier.go:826] syncProxyRules took 27.555006ms
I0802 09:30:26.050150       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-resizer updated: 1 ports
I0802 09:30:26.431638       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-snapshotter updated: 1 ports
I0802 09:30:26.810376       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpath-resizer:dummy" at 100.70.139.57:12345/TCP
I0802 09:30:26.810403       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpath-snapshotter:dummy" at 100.65.43.24:12345/TCP
I0802 09:30:26.810592       1 proxier.go:871] Syncing iptables rules
I0802 09:30:26.844501       1 proxier.go:826] syncProxyRules took 34.308153ms
I0802 09:30:29.913942       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-attacher updated: 1 ports
I0802 09:30:29.914559       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpath-attacher:dummy" at 100.65.85.203:12345/TCP
I0802 09:30:29.914703       1 proxier.go:871] Syncing iptables rules
I0802 09:30:29.968224       1 proxier.go:826] syncProxyRules took 53.842229ms
I0802 09:30:29.968619       1 proxier.go:871] Syncing iptables rules
I0802 09:30:30.011847       1 proxier.go:826] syncProxyRules took 43.427937ms
I0802 09:30:30.498662       1 service.go:275] Service ephemeral-2872-6578/csi-hostpathplugin updated: 1 ports
I0802 09:30:30.885105       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-provisioner updated: 1 ports
I0802 09:30:31.012382       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpathplugin:dummy" at 100.67.116.240:12345/TCP
I0802 09:30:31.012535       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpath-provisioner:dummy" at 100.68.12.225:12345/TCP
I0802 09:30:31.012685       1 proxier.go:871] Syncing iptables rules
I0802 09:30:31.065491       1 proxier.go:826] syncProxyRules took 53.309705ms
I0802 09:30:31.277065       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-resizer updated: 1 ports
I0802 09:30:31.668056       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-snapshotter updated: 1 ports
I0802 09:30:32.066183       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpath-resizer:dummy" at 100.67.86.111:12345/TCP
I0802 09:30:32.066211       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpath-snapshotter:dummy" at 100.65.59.49:12345/TCP
I0802 09:30:32.066343       1 proxier.go:871] Syncing iptables rules
I0802 09:30:32.101042       1 proxier.go:826] syncProxyRules took 35.018874ms
I0802 09:30:33.973556       1 proxier.go:871] Syncing iptables rules
I0802 09:30:34.001724       1 proxier.go:826] syncProxyRules took 28.392198ms
I0802 09:30:34.373734       1 proxier.go:871] Syncing iptables rules
I0802 09:30:34.399654       1 proxier.go:826] syncProxyRules took 26.162643ms
I0802 09:30:35.569484       1 proxier.go:871] Syncing iptables rules
I0802 09:30:35.595958       1 proxier.go:826] syncProxyRules took 26.713849ms
I0802 09:30:35.974219       1 proxier.go:871] Syncing iptables rules
I0802 09:30:36.006660       1 proxier.go:826] syncProxyRules took 32.658624ms
I0802 09:30:38.376646       1 proxier.go:871] Syncing iptables rules
I0802 09:30:38.406472       1 proxier.go:826] syncProxyRules took 30.065139ms
I0802 09:30:38.779462       1 proxier.go:871] Syncing iptables rules
I0802 09:30:38.820325       1 proxier.go:826] syncProxyRules took 41.132744ms
I0802 09:30:39.579152       1 proxier.go:871] Syncing iptables rules
I0802 09:30:39.605401       1 proxier.go:826] syncProxyRules took 26.448948ms
I0802 09:30:41.375713       1 proxier.go:871] Syncing iptables rules
I0802 09:30:41.405665       1 proxier.go:826] syncProxyRules took 30.188115ms
I0802 09:30:41.779419       1 proxier.go:871] Syncing iptables rules
I0802 09:30:41.806509       1 proxier.go:826] syncProxyRules took 27.334317ms
I0802 09:30:44.964029       1 service.go:275] Service kubectl-8312/agnhost-primary updated: 1 ports
I0802 09:30:44.964445       1 service.go:390] Adding new service port "kubectl-8312/agnhost-primary" at 100.69.109.236:6379/TCP
I0802 09:30:44.964662       1 proxier.go:871] Syncing iptables rules
I0802 09:30:44.999063       1 proxier.go:826] syncProxyRules took 34.820728ms
I0802 09:30:44.999316       1 proxier.go:871] Syncing iptables rules
I0802 09:30:45.026481       1 proxier.go:826] syncProxyRules took 27.388364ms
I0802 09:30:46.026814       1 proxier.go:871] Syncing iptables rules
I0802 09:30:46.053495       1 proxier.go:826] syncProxyRules took 26.89914ms
I0802 09:30:46.704760       1 service.go:275] Service volumemode-685-3249/csi-hostpath-attacher updated: 0 ports
I0802 09:30:47.053809       1 service.go:415] Removing service port "volumemode-685-3249/csi-hostpath-attacher:dummy"
I0802 09:30:47.054113       1 proxier.go:871] Syncing iptables rules
I0802 09:30:47.085268       1 proxier.go:826] syncProxyRules took 31.652719ms
I0802 09:30:47.294869       1 service.go:275] Service volumemode-685-3249/csi-hostpathplugin updated: 0 ports
I0802 09:30:47.695906       1 service.go:275] Service volumemode-685-3249/csi-hostpath-provisioner updated: 0 ports
I0802 09:30:48.085578       1 service.go:415] Removing service port "volumemode-685-3249/csi-hostpathplugin:dummy"
I0802 09:30:48.085768       1 service.go:415] Removing service port "volumemode-685-3249/csi-hostpath-provisioner:dummy"
I0802 09:30:48.085997       1 proxier.go:871] Syncing iptables rules
I0802 09:30:48.095022       1 service.go:275] Service volumemode-685-3249/csi-hostpath-resizer updated: 0 ports
I0802 09:30:48.121104       1 proxier.go:826] syncProxyRules took 35.727355ms
I0802 09:30:48.497064       1 service.go:275] Service volumemode-685-3249/csi-hostpath-snapshotter updated: 0 ports
I0802 09:30:49.121405       1 service.go:415] Removing service port "volumemode-685-3249/csi-hostpath-resizer:dummy"
I0802 09:30:49.121591       1 service.go:415] Removing service port "volumemode-685-3249/csi-hostpath-snapshotter:dummy"
I0802 09:30:49.121799       1 proxier.go:871] Syncing iptables rules
I0802 09:30:49.148649       1 proxier.go:826] syncProxyRules took 27.429043ms
I0802 09:30:58.833210       1 service.go:275] Service kubectl-8312/agnhost-primary updated: 0 ports
I0802 09:30:58.833626       1 service.go:415] Removing service port "kubectl-8312/agnhost-primary"
I0802 09:30:58.833781       1 proxier.go:871] Syncing iptables rules
I0802 09:30:58.879060       1 proxier.go:826] syncProxyRules took 45.818394ms
I0802 09:30:58.879626       1 proxier.go:871] Syncing iptables rules
I0802 09:30:58.924493       1 proxier.go:826] syncProxyRules took 45.063199ms
I0802 09:31:07.360121       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-attacher updated: 0 ports
I0802 09:31:07.361471       1 service.go:415] Removing service port "volumemode-5208-5908/csi-hostpath-attacher:dummy"
I0802 09:31:07.361620       1 proxier.go:871] Syncing iptables rules
I0802 09:31:07.392386       1 proxier.go:826] syncProxyRules took 31.093731ms
I0802 09:31:07.392600       1 proxier.go:871] Syncing iptables rules
I0802 09:31:07.422953       1 proxier.go:826] syncProxyRules took 30.535491ms
I0802 09:31:07.943344       1 service.go:275] Service volumemode-5208-5908/csi-hostpathplugin updated: 0 ports
I0802 09:31:08.331868       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-provisioner updated: 0 ports
I0802 09:31:08.423187       1 service.go:415] Removing service port "volumemode-5208-5908/csi-hostpathplugin:dummy"
I0802 09:31:08.423215       1 service.go:415] Removing service port "volumemode-5208-5908/csi-hostpath-provisioner:dummy"
I0802 09:31:08.423354       1 proxier.go:871] Syncing iptables rules
I0802 09:31:08.447488       1 proxier.go:826] syncProxyRules took 24.457416ms
I0802 09:31:08.724154       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-resizer updated: 0 ports
I0802 09:31:09.113005       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-snapshotter updated: 0 ports
I0802 09:31:09.447805       1 service.go:415] Removing service port "volumemode-5208-5908/csi-hostpath-resizer:dummy"
I0802 09:31:09.447837       1 service.go:415] Removing service port "volumemode-5208-5908/csi-hostpath-snapshotter:dummy"
I0802 09:31:09.448021       1 proxier.go:871] Syncing iptables rules
I0802 09:31:09.472208       1 proxier.go:826] syncProxyRules took 24.594817ms
I0802 09:31:39.572848       1 service.go:275] Service services-2137/endpoint-test2 updated: 1 ports
I0802 09:31:39.573202       1 service.go:390] Adding new service port "services-2137/endpoint-test2" at 100.65.50.254:80/TCP
I0802 09:31:39.573298       1 proxier.go:871] Syncing iptables rules
I0802 09:31:39.601691       1 proxier.go:826] syncProxyRules took 28.666899ms
I0802 09:31:39.601939       1 proxier.go:871] Syncing iptables rules
I0802 09:31:39.628092       1 proxier.go:826] syncProxyRules took 26.375443ms
I0802 09:31:39.873224       1 service.go:275] Service services-295/tolerate-unready updated: 1 ports
I0802 09:31:40.585411       1 service.go:390] Adding new service port "services-295/tolerate-unready:http" at 100.67.114.99:80/TCP
I0802 09:31:40.585498       1 proxier.go:871] Syncing iptables rules
I0802 09:31:40.673877       1 proxier.go:826] syncProxyRules took 88.61062ms
I0802 09:31:41.675060       1 proxier.go:871] Syncing iptables rules
I0802 09:31:41.714616       1 proxier.go:826] syncProxyRules took 39.833142ms
I0802 09:31:48.986217       1 proxier.go:871] Syncing iptables rules
I0802 09:31:49.014190       1 proxier.go:826] syncProxyRules took 28.238234ms
I0802 09:31:53.385873       1 proxier.go:871] Syncing iptables rules
I0802 09:31:53.415164       1 proxier.go:826] syncProxyRules took 29.492224ms
I0802 09:31:54.883730       1 proxier.go:871] Syncing iptables rules
I0802 09:31:54.922173       1 proxier.go:826] syncProxyRules took 38.693013ms
I0802 09:31:55.870552       1 proxier.go:871] Syncing iptables rules
I0802 09:31:55.943234       1 proxier.go:826] syncProxyRules took 72.918091ms
I0802 09:31:55.944004       1 proxier.go:871] Syncing iptables rules
I0802 09:31:55.971182       1 proxier.go:826] syncProxyRules took 27.394924ms
I0802 09:31:56.971537       1 proxier.go:871] Syncing iptables rules
I0802 09:31:57.007935       1 proxier.go:826] syncProxyRules took 36.59956ms
I0802 09:31:57.639080       1 service.go:275] Service services-2137/endpoint-test2 updated: 0 ports
I0802 09:31:58.008468       1 service.go:415] Removing service port "services-2137/endpoint-test2"
I0802 09:31:58.008544       1 proxier.go:871] Syncing iptables rules
I0802 09:31:58.035335       1 proxier.go:826] syncProxyRules took 27.044751ms
I0802 09:31:58.999485       1 service.go:275] Service services-295/tolerate-unready updated: 0 ports
I0802 09:31:59.000219       1 service.go:415] Removing service port "services-295/tolerate-unready:http"
I0802 09:31:59.001060       1 proxier.go:871] Syncing iptables rules
I0802 09:31:59.031807       1 proxier.go:826] syncProxyRules took 32.278964ms
I0802 09:32:00.032189       1 proxier.go:871] Syncing iptables rules
I0802 09:32:00.071124       1 proxier.go:826] syncProxyRules took 39.194254ms
I0802 09:32:00.585264       1 service.go:275] Service services-9578/hairpin-test updated: 1 ports
I0802 09:32:01.071420       1 service.go:390] Adding new service port "services-9578/hairpin-test" at 100.71.179.245:8080/TCP
I0802 09:32:01.071524       1 proxier.go:871] Syncing iptables rules
I0802 09:32:01.096574       1 proxier.go:826] syncProxyRules took 25.336331ms
I0802 09:32:01.995432       1 proxier.go:871] Syncing iptables rules
I0802 09:32:02.048536       1 proxier.go:826] syncProxyRules took 53.397291ms
I0802 09:32:12.440649       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-attacher updated: 0 ports
I0802 09:32:12.440965       1 service.go:415] Removing service port "ephemeral-2872-6578/csi-hostpath-attacher:dummy"
I0802 09:32:12.441459       1 proxier.go:871] Syncing iptables rules
I0802 09:32:12.475851       1 proxier.go:826] syncProxyRules took 35.098672ms
I0802 09:32:12.629566       1 service.go:275] Service volume-expand-8616-827/csi-hostpath-attacher updated: 1 ports
I0802 09:32:12.630050       1 service.go:390] Adding new service port "volume-expand-8616-827/csi-hostpath-attacher:dummy" at 100.68.103.208:12345/TCP
I0802 09:32:12.630231       1 proxier.go:871] Syncing iptables rules
I0802 09:32:12.661149       1 proxier.go:826] syncProxyRules took 31.547817ms
I0802 09:32:13.043063       1 service.go:275] Service ephemeral-2872-6578/csi-hostpathplugin updated: 0 ports
I0802 09:32:13.207057       1 service.go:275] Service volume-expand-8616-827/csi-hostpathplugin updated: 1 ports
I0802 09:32:13.436037       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-provisioner updated: 0 ports
I0802 09:32:13.447738       1 service.go:415] Removing service port "ephemeral-2872-6578/csi-hostpathplugin:dummy"
I0802 09:32:13.447773       1 service.go:390] Adding new service port "volume-expand-8616-827/csi-hostpathplugin:dummy" at 100.66.147.83:12345/TCP
I0802 09:32:13.447781       1 service.go:415] Removing service port "ephemeral-2872-6578/csi-hostpath-provisioner:dummy"
I0802 09:32:13.447919       1 proxier.go:871] Syncing iptables rules
I0802 09:32:13.474481       1 proxier.go:826] syncProxyRules took 26.959442ms
I0802 09:32:13.595656       1 service.go:275] Service volume-expand-8616-827/csi-hostpath-provisioner updated: 1 ports
I0802 09:32:13.831012       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-resizer updated: 0 ports
I0802 09:32:13.982555       1 service.go:275] Service volume-expand-8616-827/csi-hostpath-resizer updated: 1 ports
I0802 09:32:14.243258       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-snapshotter updated: 0 ports
I0802 09:32:14.375380       1 service.go:275] Service volume-expand-8616-827/csi-hostpath-snapshotter updated: 1 ports
I0802 09:32:14.474807       1 service.go:390] Adding new service port "volume-expand-8616-827/csi-hostpath-provisioner:dummy" at 100.69.16.155:12345/TCP
I0802 09:32:14.474833       1 service.go:415] Removing service port "ephemeral-2872-6578/csi-hostpath-resizer:dummy"
I0802 09:32:14.474845       1 service.go:390] Adding new service port "volume-expand-8616-827/csi-hostpath-resizer:dummy" at 100.65.68.228:12345/TCP
I0802 09:32:14.474853       1 service.go:415] Removing service port "ephemeral-2872-6578/csi-hostpath-snapshotter:dummy"
I0802 09:32:14.474864       1 service.go:390] Adding new service port "volume-expand-8616-827/csi-hostpath-snapshotter:dummy" at 100.65.206.78:12345/TCP
I0802 09:32:14.474946       1 proxier.go:871] Syncing iptables rules
I0802 09:32:14.508365       1 proxier.go:826] syncProxyRules took 33.765327ms
I0802 09:32:14.549055       1 service.go:275] Service services-9578/hairpin-test updated: 0 ports
I0802 09:32:15.509088       1 service.go:415] Removing service port "services-9578/hairpin-test"
I0802 09:32:15.509260       1 proxier.go:871] Syncing iptables rules
I0802 09:32:15.535038       1 proxier.go:826] syncProxyRules took 26.146394ms
==== END logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-43-68.ap-southeast-2.compute.internal ====
==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-47-13.ap-southeast-2.compute.internal ====
I0802 09:25:45.846655       1 flags.go:59] FLAG: --add-dir-header="false"
I0802 09:25:45.847821       1 flags.go:59] FLAG: --alsologtostderr="true"
I0802 09:25:45.847842       1 flags.go:59] FLAG: --bind-address="0.0.0.0"
I0802 09:25:45.847867       1 flags.go:59] FLAG: --bind-address-hard-fail="false"
I0802 09:25:45.847897       1 flags.go:59] FLAG: --cleanup="false"
I0802 09:25:45.847905       1 flags.go:59] FLAG: --cleanup-ipvs="true"
I0802 09:25:45.847909       1 flags.go:59] FLAG: --cluster-cidr="100.96.0.0/11"
I0802 09:25:45.847929       1 flags.go:59] FLAG: --config=""
I0802 09:25:45.847934       1 flags.go:59] FLAG: --config-sync-period="15m0s"
I0802 09:25:45.847940       1 flags.go:59] FLAG: --conntrack-max-per-core="131072"
I0802 09:25:45.847947       1 flags.go:59] FLAG: --conntrack-min="131072"
I0802 09:25:45.847952       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I0802 09:25:45.847957       1 flags.go:59] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I0802 09:25:45.848061       1 flags.go:59] FLAG: --detect-local-mode=""
I0802 09:25:45.848069       1 flags.go:59] FLAG: --feature-gates=""
I0802 09:25:45.848077       1 flags.go:59] FLAG: --healthz-bind-address="0.0.0.0:10256"
I0802 09:25:45.848094       1 flags.go:59] FLAG: --healthz-port="10256"
I0802 09:25:45.848099       1 flags.go:59] FLAG: --help="false"
I0802 09:25:45.848104       1 flags.go:59] FLAG: --hostname-override="ip-172-20-47-13.ap-southeast-2.compute.internal"
I0802 09:25:45.848111       1 flags.go:59] FLAG: --iptables-masquerade-bit="14"
I0802 09:25:45.848189       1 flags.go:59] FLAG: --iptables-min-sync-period="1s"
I0802 09:25:45.848195       1 flags.go:59] FLAG: --iptables-sync-period="30s"
I0802 09:25:45.848200       1 flags.go:59] FLAG: --ipvs-exclude-cidrs="[]"
I0802 09:25:45.848211       1 flags.go:59] FLAG: --ipvs-min-sync-period="0s"
I0802 09:25:45.848216       1 flags.go:59] FLAG: --ipvs-scheduler=""
I0802 09:25:45.848231       1 flags.go:59] FLAG: --ipvs-strict-arp="false"
I0802 09:25:45.848236       1 flags.go:59] FLAG: --ipvs-sync-period="30s"
I0802 09:25:45.848302       1 flags.go:59] FLAG: --ipvs-tcp-timeout="0s"
I0802 09:25:45.848314       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout="0s"
I0802 09:25:45.848319       1 flags.go:59] FLAG: --ipvs-udp-timeout="0s"
I0802 09:25:45.848342       1 flags.go:59] FLAG: --kube-api-burst="10"
I0802 09:25:45.848349       1 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0802 09:25:45.848364       1 flags.go:59] FLAG: --kube-api-qps="5"
I0802 09:25:45.848372       1 flags.go:59] FLAG: --kubeconfig="/var/lib/kube-proxy/kubeconfig"
I0802 09:25:45.848378       1 flags.go:59] FLAG: --log-backtrace-at=":0"
I0802 09:25:45.848387       1 flags.go:59] FLAG: --log-dir=""
I0802 09:25:45.848393       1 flags.go:59] FLAG: --log-file="/var/log/kube-proxy.log"
I0802 09:25:45.848399       1 flags.go:59] FLAG: --log-file-max-size="1800"
I0802 09:25:45.848424       1 flags.go:59] FLAG: --log-flush-frequency="5s"
I0802 09:25:45.848430       1 flags.go:59] FLAG: --logtostderr="false"
I0802 09:25:45.848436       1 flags.go:59] FLAG: --masquerade-all="false"
I0802 09:25:45.848441       1 flags.go:59] FLAG: --master="https://api.internal.e2e-8608f95a98-9381a.test-cncf-aws.k8s.io"
I0802 09:25:45.848448       1 flags.go:59] FLAG: --metrics-bind-address="127.0.0.1:10249"
I0802 09:25:45.848454       1 flags.go:59] FLAG: --metrics-port="10249"
I0802 09:25:45.848459       1 flags.go:59] FLAG: --nodeport-addresses="[]"
I0802 09:25:45.848466       1 flags.go:59] FLAG: --one-output="false"
I0802 09:25:45.848471       1 flags.go:59] FLAG: --oom-score-adj="-998"
I0802 09:25:45.848476       1 flags.go:59] FLAG: --profiling="false"
I0802 09:25:45.848509       1 flags.go:59] FLAG: --proxy-mode=""
I0802 09:25:45.848517       1 flags.go:59] FLAG: --proxy-port-range=""
I0802 09:25:45.848524       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=""
I0802 09:25:45.848529       1 flags.go:59] FLAG: --skip-headers="false"
I0802 09:25:45.848534       1 flags.go:59] FLAG: --skip-log-headers="false"
I0802 09:25:45.848539       1 flags.go:59] FLAG: --stderrthreshold="2"
I0802 09:25:45.848544       1 flags.go:59] FLAG: --udp-timeout="250ms"
I0802 09:25:45.848550       1 flags.go:59] FLAG: --v="2"
I0802 09:25:45.848555       1 flags.go:59] FLAG: --version="false"
I0802 09:25:45.848586       1 flags.go:59] FLAG: --vmodule=""
I0802 09:25:45.848593       1 flags.go:59] FLAG: --write-config-to=""
W0802 09:25:45.849489       1 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0802 09:25:45.849641       1 feature_gate.go:243] feature gates: &{map[]}
I0802 09:25:45.849794       1 feature_gate.go:243] feature gates: &{map[]}
I0802 09:25:45.960544       1 node.go:172] Successfully retrieved node IP: 172.20.47.13
I0802 09:25:45.960612       1 server_others.go:142] kube-proxy node IP is an IPv4 address (172.20.47.13), assume IPv4 operation
W0802 09:25:46.025868       1 server_others.go:584] Unknown proxy mode "", assuming iptables proxy
I0802 09:25:46.026129       1 server_others.go:182] DetectLocalMode: 'ClusterCIDR'
I0802 09:25:46.026150       1 server_others.go:185] Using iptables Proxier.
I0802 09:25:46.026233       1 utils.go:321] Changed sysctl "net/ipv4/conf/all/route_localnet": 0 -> 1
I0802 09:25:46.026289       1 proxier.go:287] iptables(IPv4) masquerade mark: 0x00004000
I0802 09:25:46.026341       1 proxier.go:334] iptables(IPv4) sync params: minSyncPeriod=1s, syncPeriod=30s, burstSyncs=2
I0802 09:25:46.026413       1 proxier.go:346] iptables(IPv4) supports --random-fully
I0802 09:25:46.027950       1 server.go:650] Version: v1.20.9
I0802 09:25:46.029621       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144
I0802 09:25:46.029822       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0802 09:25:46.030046       1 mount_linux.go:188] Detected OS without systemd
I0802 09:25:46.032672       1 conntrack.go:83] Setting conntrack hashsize to 65536
I0802 09:25:46.036980       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0802 09:25:46.037047       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0802 09:25:46.039112       1 config.go:315] Starting service config controller
I0802 09:25:46.039755       1 shared_informer.go:240] Waiting for caches to sync for service config
I0802 09:25:46.039805       1 config.go:224] Starting endpoint slice config controller
I0802 09:25:46.039967       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0802 09:25:46.039989       1 reflector.go:219] Starting reflector *v1.Service (15m0s) from k8s.io/client-go/informers/factory.go:134
I0802 09:25:46.040126       1 reflector.go:219] Starting reflector *v1beta1.EndpointSlice (15m0s) from k8s.io/client-go/informers/factory.go:134
I0802 09:25:46.045339       1 service.go:275] Service deployment-4314/test-rolling-update-with-lb updated: 1 ports
I0802 09:25:46.045602       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-resizer updated: 1 ports
I0802 09:25:46.045624       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-snapshotter updated: 1 ports
I0802 09:25:46.045641       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-provisioner updated: 1 ports
I0802 09:25:46.045659       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-resizer updated: 1 ports
I0802 09:25:46.045686       1 service.go:275] Service default/kubernetes updated: 1 ports
I0802 09:25:46.045725       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-provisioner updated: 1 ports
I0802 09:25:46.045737       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-attacher updated: 1 ports
I0802 09:25:46.045785       1 service.go:275] Service kube-system/kube-dns updated: 3 ports
I0802 09:25:46.045808       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-attacher updated: 1 ports
I0802 09:25:46.045822       1 service.go:275] Service ephemeral-9710-9555/csi-hostpathplugin updated: 1 ports
I0802 09:25:46.045908       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-snapshotter updated: 1 ports
I0802 09:25:46.045934       1 service.go:275] Service volume-expand-9751-6514/csi-hostpathplugin updated: 1 ports
I0802 09:25:46.139948       1 shared_informer.go:247] Caches are synced for service config 
I0802 09:25:46.140069       1 proxier.go:818] Not syncing iptables until Services and Endpoints have been received from master
I0802 09:25:46.140109       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0802 09:25:46.140388       1 service.go:390] Adding new service port "ephemeral-9710-9555/csi-hostpath-attacher:dummy" at 100.67.215.140:12345/TCP
I0802 09:25:46.140421       1 service.go:390] Adding new service port "volume-expand-9751-6514/csi-hostpath-attacher:dummy" at 100.67.115.219:12345/TCP
I0802 09:25:46.140487       1 service.go:390] Adding new service port "ephemeral-9710-9555/csi-hostpathplugin:dummy" at 100.70.173.208:12345/TCP
I0802 09:25:46.140506       1 service.go:390] Adding new service port "volume-expand-9751-6514/csi-hostpathplugin:dummy" at 100.71.58.236:12345/TCP
I0802 09:25:46.140520       1 service.go:390] Adding new service port "deployment-4314/test-rolling-update-with-lb" at 100.71.104.248:80/TCP
I0802 09:25:46.140532       1 service.go:390] Adding new service port "ephemeral-9710-9555/csi-hostpath-snapshotter:dummy" at 100.69.208.138:12345/TCP
I0802 09:25:46.140585       1 service.go:390] Adding new service port "default/kubernetes:https" at 100.64.0.1:443/TCP
I0802 09:25:46.140647       1 service.go:390] Adding new service port "ephemeral-9710-9555/csi-hostpath-provisioner:dummy" at 100.64.220.139:12345/TCP
I0802 09:25:46.140723       1 service.go:390] Adding new service port "kube-system/kube-dns:metrics" at 100.64.0.10:9153/TCP
I0802 09:25:46.140747       1 service.go:390] Adding new service port "kube-system/kube-dns:dns" at 100.64.0.10:53/UDP
I0802 09:25:46.140776       1 service.go:390] Adding new service port "kube-system/kube-dns:dns-tcp" at 100.64.0.10:53/TCP
I0802 09:25:46.140827       1 service.go:390] Adding new service port "volume-expand-9751-6514/csi-hostpath-snapshotter:dummy" at 100.71.86.20:12345/TCP
I0802 09:25:46.140847       1 service.go:390] Adding new service port "ephemeral-9710-9555/csi-hostpath-resizer:dummy" at 100.65.129.225:12345/TCP
I0802 09:25:46.140920       1 service.go:390] Adding new service port "volume-expand-9751-6514/csi-hostpath-provisioner:dummy" at 100.67.64.219:12345/TCP
I0802 09:25:46.140942       1 service.go:390] Adding new service port "volume-expand-9751-6514/csi-hostpath-resizer:dummy" at 100.65.122.98:12345/TCP
I0802 09:25:46.141209       1 proxier.go:858] Stale udp service kube-system/kube-dns:dns -> 100.64.0.10
I0802 09:25:46.141241       1 proxier.go:871] Syncing iptables rules
I0802 09:25:46.171783       1 proxier.go:1715] Opened local port "nodePort for deployment-4314/test-rolling-update-with-lb" (:31127/tcp)
I0802 09:25:46.187210       1 service_health.go:98] Opening healthcheck "deployment-4314/test-rolling-update-with-lb" on port 31777
I0802 09:25:46.190837       1 proxier.go:826] syncProxyRules took 50.651107ms
I0802 09:25:53.817079       1 proxier.go:871] Syncing iptables rules
I0802 09:25:53.844872       1 proxier.go:826] syncProxyRules took 28.301423ms
I0802 09:25:53.845579       1 proxier.go:871] Syncing iptables rules
I0802 09:25:53.874858       1 proxier.go:826] syncProxyRules took 29.926562ms
I0802 09:25:55.130096       1 service.go:275] Service services-1470/up-down-1 updated: 1 ports
I0802 09:25:55.130522       1 service.go:390] Adding new service port "services-1470/up-down-1" at 100.70.4.215:80/TCP
I0802 09:25:55.130578       1 proxier.go:871] Syncing iptables rules
I0802 09:25:55.172345       1 proxier.go:826] syncProxyRules took 42.210272ms
I0802 09:25:56.173064       1 proxier.go:871] Syncing iptables rules
I0802 09:25:56.218996       1 proxier.go:826] syncProxyRules took 46.518668ms
I0802 09:25:56.734737       1 service.go:275] Service kubectl-4027/agnhost-replica updated: 1 ports
I0802 09:25:57.219599       1 service.go:390] Adding new service port "kubectl-4027/agnhost-replica" at 100.71.189.13:6379/TCP
I0802 09:25:57.219675       1 proxier.go:871] Syncing iptables rules
I0802 09:25:57.248077       1 proxier.go:826] syncProxyRules took 28.957053ms
I0802 09:25:58.248931       1 proxier.go:871] Syncing iptables rules
I0802 09:25:58.297744       1 proxier.go:826] syncProxyRules took 49.493602ms
I0802 09:25:58.524593       1 service.go:275] Service kubectl-4027/agnhost-primary updated: 1 ports
I0802 09:25:59.276849       1 service.go:390] Adding new service port "kubectl-4027/agnhost-primary" at 100.66.15.222:6379/TCP
I0802 09:25:59.276953       1 proxier.go:871] Syncing iptables rules
I0802 09:25:59.343598       1 proxier.go:826] syncProxyRules took 67.429588ms
I0802 09:25:59.861933       1 proxier.go:871] Syncing iptables rules
I0802 09:25:59.904810       1 proxier.go:826] syncProxyRules took 43.493231ms
I0802 09:26:00.327269       1 service.go:275] Service kubectl-4027/frontend updated: 1 ports
I0802 09:26:00.905429       1 service.go:390] Adding new service port "kubectl-4027/frontend" at 100.67.213.81:80/TCP
I0802 09:26:00.905515       1 proxier.go:871] Syncing iptables rules
I0802 09:26:00.944243       1 proxier.go:826] syncProxyRules took 39.298575ms
I0802 09:26:01.261832       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-attacher updated: 1 ports
I0802 09:26:01.854559       1 service.go:275] Service volume-expand-5125-7028/csi-hostpathplugin updated: 1 ports
I0802 09:26:01.855282       1 service.go:390] Adding new service port "volume-expand-5125-7028/csi-hostpathplugin:dummy" at 100.68.226.200:12345/TCP
I0802 09:26:01.855403       1 service.go:390] Adding new service port "volume-expand-5125-7028/csi-hostpath-attacher:dummy" at 100.68.29.179:12345/TCP
I0802 09:26:01.855450       1 proxier.go:871] Syncing iptables rules
I0802 09:26:01.886530       1 proxier.go:826] syncProxyRules took 31.694965ms
I0802 09:26:02.248099       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-provisioner updated: 1 ports
I0802 09:26:02.641831       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-resizer updated: 1 ports
I0802 09:26:02.891194       1 service.go:390] Adding new service port "volume-expand-5125-7028/csi-hostpath-provisioner:dummy" at 100.66.131.101:12345/TCP
I0802 09:26:02.891732       1 service.go:390] Adding new service port "volume-expand-5125-7028/csi-hostpath-resizer:dummy" at 100.66.76.190:12345/TCP
I0802 09:26:02.891864       1 proxier.go:871] Syncing iptables rules
I0802 09:26:03.011787       1 proxier.go:826] syncProxyRules took 123.738936ms
I0802 09:26:03.032153       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-snapshotter updated: 1 ports
I0802 09:26:04.012391       1 service.go:390] Adding new service port "volume-expand-5125-7028/csi-hostpath-snapshotter:dummy" at 100.69.27.176:12345/TCP
I0802 09:26:04.012488       1 proxier.go:871] Syncing iptables rules
I0802 09:26:04.040333       1 proxier.go:826] syncProxyRules took 28.405931ms
I0802 09:26:04.949478       1 service.go:275] Service services-1470/up-down-2 updated: 1 ports
I0802 
09:26:04.950522       1 service.go:390] Adding new service port \"services-1470/up-down-2\" at 100.70.71.58:80/TCP\nI0802 09:26:04.950623       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:04.997284       1 proxier.go:826] syncProxyRules took 47.753482ms\nI0802 09:26:05.891188       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:05.936368       1 proxier.go:826] syncProxyRules took 45.789102ms\nI0802 09:26:05.969324       1 service.go:275] Service webhook-9862/e2e-test-webhook updated: 1 ports\nI0802 09:26:06.885064       1 service.go:390] Adding new service port \"webhook-9862/e2e-test-webhook\" at 100.66.227.222:8443/TCP\nI0802 09:26:06.885186       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:06.942159       1 proxier.go:826] syncProxyRules took 57.712433ms\nI0802 09:26:07.942931       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:07.972130       1 proxier.go:826] syncProxyRules took 29.77293ms\nI0802 09:26:08.972833       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:09.029071       1 proxier.go:826] syncProxyRules took 56.79811ms\nI0802 09:26:10.029784       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:10.063646       1 proxier.go:826] syncProxyRules took 34.440564ms\nI0802 09:26:10.880636       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:10.915687       1 proxier.go:826] syncProxyRules took 35.735695ms\nI0802 09:26:11.313200       1 service.go:275] Service webhook-9862/e2e-test-webhook updated: 0 ports\nI0802 09:26:11.853416       1 service.go:415] Removing service port \"webhook-9862/e2e-test-webhook\"\nI0802 09:26:11.853580       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:11.896420       1 proxier.go:826] syncProxyRules took 43.698738ms\nI0802 09:26:12.897160       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:12.927322       1 proxier.go:826] syncProxyRules took 30.766063ms\nI0802 09:26:13.854505       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:13.899292       1 
proxier.go:826] syncProxyRules took 45.549212ms\nI0802 09:26:14.900350       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:14.943591       1 proxier.go:826] syncProxyRules took 44.158011ms\nI0802 09:26:14.982576       1 service.go:275] Service kubectl-4027/agnhost-replica updated: 0 ports\nI0802 09:26:15.730413       1 service.go:275] Service services-5870/service-headless-toggled updated: 1 ports\nI0802 09:26:15.867811       1 service.go:275] Service kubectl-4027/agnhost-primary updated: 0 ports\nI0802 09:26:15.868446       1 service.go:415] Removing service port \"kubectl-4027/agnhost-primary\"\nI0802 09:26:15.868470       1 service.go:415] Removing service port \"kubectl-4027/agnhost-replica\"\nI0802 09:26:15.868485       1 service.go:390] Adding new service port \"services-5870/service-headless-toggled\" at 100.66.82.148:80/TCP\nI0802 09:26:15.868649       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:15.899467       1 proxier.go:826] syncProxyRules took 31.602067ms\nI0802 09:26:16.789642       1 service.go:275] Service kubectl-4027/frontend updated: 0 ports\nI0802 09:26:16.900170       1 service.go:415] Removing service port \"kubectl-4027/frontend\"\nI0802 09:26:16.900394       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:16.934408       1 proxier.go:826] syncProxyRules took 34.809323ms\nI0802 09:26:18.175624       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:18.210607       1 proxier.go:826] syncProxyRules took 35.493313ms\nI0802 09:26:18.893367       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:18.923365       1 proxier.go:826] syncProxyRules took 30.540544ms\nI0802 09:26:31.264169       1 service.go:275] Service deployment-4314/test-rolling-update-with-lb updated: 0 ports\nI0802 09:26:31.264643       1 service.go:415] Removing service port \"deployment-4314/test-rolling-update-with-lb\"\nI0802 09:26:31.264719       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:31.308950       1 service_health.go:83] 
Closing healthcheck \"deployment-4314/test-rolling-update-with-lb\" on port 31777\nI0802 09:26:31.309023       1 proxier.go:826] syncProxyRules took 44.815336ms\nI0802 09:26:36.843962       1 service.go:275] Service services-5870/service-headless-toggled updated: 0 ports\nI0802 09:26:36.844540       1 service.go:415] Removing service port \"services-5870/service-headless-toggled\"\nI0802 09:26:36.844635       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:36.898970       1 proxier.go:826] syncProxyRules took 54.965297ms\nI0802 09:26:39.420582       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:39.484492       1 proxier.go:826] syncProxyRules took 64.572852ms\nI0802 09:26:40.422815       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:40.627843       1 proxier.go:826] syncProxyRules took 205.635757ms\nI0802 09:26:42.879169       1 service.go:275] Service webhook-3975/e2e-test-webhook updated: 1 ports\nI0802 09:26:42.879776       1 service.go:390] Adding new service port \"webhook-3975/e2e-test-webhook\" at 100.69.160.98:8443/TCP\nI0802 09:26:42.879843       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:42.918790       1 proxier.go:826] syncProxyRules took 39.560033ms\nI0802 09:26:42.919694       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:42.985182       1 proxier.go:826] syncProxyRules took 66.348277ms\nI0802 09:26:44.027201       1 service.go:275] Service services-5870/service-headless-toggled updated: 1 ports\nI0802 09:26:44.027765       1 service.go:390] Adding new service port \"services-5870/service-headless-toggled\" at 100.66.82.148:80/TCP\nI0802 09:26:44.027844       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:44.063969       1 proxier.go:826] syncProxyRules took 36.723167ms\nI0802 09:26:46.298153       1 service.go:275] Service services-1470/up-down-1 updated: 0 ports\nI0802 09:26:46.298769       1 service.go:415] Removing service port \"services-1470/up-down-1\"\nI0802 09:26:46.298840       1 
proxier.go:871] Syncing iptables rules\nI0802 09:26:46.358324       1 proxier.go:826] syncProxyRules took 60.134523ms\nI0802 09:26:46.908460       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:46.937172       1 proxier.go:826] syncProxyRules took 29.173061ms\nI0802 09:26:47.431211       1 service.go:275] Service webhook-3975/e2e-test-webhook updated: 0 ports\nI0802 09:26:47.431831       1 service.go:415] Removing service port \"webhook-3975/e2e-test-webhook\"\nI0802 09:26:47.431927       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:47.469971       1 proxier.go:826] syncProxyRules took 38.722539ms\nI0802 09:26:48.470865       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:48.539377       1 proxier.go:826] syncProxyRules took 69.126035ms\nI0802 09:27:03.388104       1 service.go:275] Service services-1470/up-down-3 updated: 1 ports\nI0802 09:27:03.388789       1 service.go:390] Adding new service port \"services-1470/up-down-3\" at 100.64.1.233:80/TCP\nI0802 09:27:03.388860       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:03.423436       1 proxier.go:826] syncProxyRules took 35.295948ms\nI0802 09:27:03.424280       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:03.470975       1 proxier.go:826] syncProxyRules took 47.504178ms\nI0802 09:27:05.572087       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:05.626723       1 proxier.go:826] syncProxyRules took 55.618399ms\nI0802 09:27:05.752160       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:05.793212       1 proxier.go:826] syncProxyRules took 41.786645ms\nI0802 09:27:06.345818       1 service.go:275] Service services-5870/service-headless-toggled updated: 0 ports\nI0802 09:27:06.793815       1 service.go:415] Removing service port \"services-5870/service-headless-toggled\"\nI0802 09:27:06.793993       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:06.823444       1 proxier.go:826] syncProxyRules took 30.101505ms\nI0802 09:27:08.951737       1 
service.go:275] Service provisioning-2971-8427/csi-hostpath-attacher updated: 1 ports\nI0802 09:27:08.952423       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpath-attacher:dummy\" at 100.64.190.82:12345/TCP\nI0802 09:27:08.952497       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:09.010445       1 proxier.go:826] syncProxyRules took 58.577944ms\nI0802 09:27:09.013408       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:09.053994       1 proxier.go:826] syncProxyRules took 43.514233ms\nI0802 09:27:09.535212       1 service.go:275] Service provisioning-2971-8427/csi-hostpathplugin updated: 1 ports\nI0802 09:27:09.918766       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-provisioner updated: 1 ports\nI0802 09:27:10.054770       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpathplugin:dummy\" at 100.65.218.93:12345/TCP\nI0802 09:27:10.054801       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpath-provisioner:dummy\" at 100.66.137.241:12345/TCP\nI0802 09:27:10.054903       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:10.088867       1 proxier.go:826] syncProxyRules took 34.671081ms\nI0802 09:27:10.304771       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-resizer updated: 1 ports\nI0802 09:27:10.694804       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:27:11.089501       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpath-resizer:dummy\" at 100.67.174.173:12345/TCP\nI0802 09:27:11.089533       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpath-snapshotter:dummy\" at 100.68.17.221:12345/TCP\nI0802 09:27:11.089601       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:11.118316       1 proxier.go:826] syncProxyRules took 29.268194ms\nI0802 09:27:17.838553       1 proxier.go:871] Syncing iptables rules\nI0802 
09:27:17.878423       1 proxier.go:826] syncProxyRules took 40.391458ms\nI0802 09:27:18.833593       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:18.876488       1 proxier.go:826] syncProxyRules took 43.472746ms\nI0802 09:27:19.836691       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-attacher updated: 0 ports\nI0802 09:27:19.837567       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpath-attacher:dummy\"\nI0802 09:27:19.837655       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:19.868872       1 proxier.go:826] syncProxyRules took 32.14219ms\nI0802 09:27:20.307501       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:20.423295       1 service.go:275] Service volume-expand-5125-7028/csi-hostpathplugin updated: 0 ports\nI0802 09:27:20.463291       1 proxier.go:826] syncProxyRules took 156.730037ms\nI0802 09:27:20.813133       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-provisioner updated: 0 ports\nI0802 09:27:21.205060       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-resizer updated: 0 ports\nI0802 09:27:21.205674       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpathplugin:dummy\"\nI0802 09:27:21.205713       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpath-provisioner:dummy\"\nI0802 09:27:21.205722       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpath-resizer:dummy\"\nI0802 09:27:21.205820       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:21.246235       1 proxier.go:826] syncProxyRules took 41.065455ms\nI0802 09:27:21.596328       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:27:22.246814       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpath-snapshotter:dummy\"\nI0802 09:27:22.246962       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:22.276365       1 
proxier.go:826] syncProxyRules took 30.057766ms\nI0802 09:27:23.433473       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:23.464928       1 proxier.go:826] syncProxyRules took 31.994598ms\nI0802 09:27:23.710298       1 service.go:275] Service webhook-8390/e2e-test-webhook updated: 1 ports\nI0802 09:27:24.465593       1 service.go:390] Adding new service port \"webhook-8390/e2e-test-webhook\" at 100.66.116.29:8443/TCP\nI0802 09:27:24.465700       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:24.494315       1 proxier.go:826] syncProxyRules took 29.25794ms\nI0802 09:27:31.644639       1 service.go:275] Service dns-5822/dns-test-service-3 updated: 1 ports\nI0802 09:27:31.646260       1 service.go:390] Adding new service port \"dns-5822/dns-test-service-3:http\" at 100.65.38.43:80/TCP\nI0802 09:27:31.646535       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:31.676938       1 proxier.go:826] syncProxyRules took 32.261207ms\nI0802 09:27:33.142631       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:33.181440       1 service.go:275] Service services-1470/up-down-2 updated: 0 ports\nI0802 09:27:33.193397       1 proxier.go:826] syncProxyRules took 51.538659ms\nI0802 09:27:33.194113       1 service.go:415] Removing service port \"services-1470/up-down-2\"\nI0802 09:27:33.194216       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:33.203575       1 service.go:275] Service services-1470/up-down-3 updated: 0 ports\nI0802 09:27:33.236681       1 proxier.go:826] syncProxyRules took 43.244665ms\nI0802 09:27:34.237559       1 service.go:415] Removing service port \"services-1470/up-down-3\"\nI0802 09:27:34.237659       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:34.287208       1 proxier.go:826] syncProxyRules took 50.394769ms\nI0802 09:27:37.364099       1 service.go:275] Service dns-5822/dns-test-service-3 updated: 0 ports\nI0802 09:27:37.364664       1 service.go:415] Removing service port 
\"dns-5822/dns-test-service-3:http\"\nI0802 09:27:37.364735       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:37.431181       1 proxier.go:826] syncProxyRules took 67.044226ms\nI0802 09:27:38.408665       1 service.go:275] Service webhook-8390/e2e-test-webhook updated: 0 ports\nI0802 09:27:38.409239       1 service.go:415] Removing service port \"webhook-8390/e2e-test-webhook\"\nI0802 09:27:38.409289       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:38.449538       1 proxier.go:826] syncProxyRules took 40.839112ms\nI0802 09:27:38.450128       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:38.478718       1 proxier.go:826] syncProxyRules took 29.143497ms\nI0802 09:27:39.871798       1 service.go:275] Service webhook-4152/e2e-test-webhook updated: 1 ports\nI0802 09:27:39.872371       1 service.go:390] Adding new service port \"webhook-4152/e2e-test-webhook\" at 100.66.210.237:8443/TCP\nI0802 09:27:39.872456       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:39.907221       1 proxier.go:826] syncProxyRules took 35.387801ms\nI0802 09:27:40.908471       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:40.953057       1 proxier.go:826] syncProxyRules took 45.666632ms\nI0802 09:27:42.979385       1 service.go:275] Service webhook-4152/e2e-test-webhook updated: 0 ports\nI0802 09:27:42.979976       1 service.go:415] Removing service port \"webhook-4152/e2e-test-webhook\"\nI0802 09:27:42.980073       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:43.028575       1 proxier.go:826] syncProxyRules took 49.152036ms\nI0802 09:27:43.451598       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:43.493863       1 proxier.go:826] syncProxyRules took 42.756109ms\nI0802 09:28:09.157182       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-attacher updated: 1 ports\nI0802 09:28:09.157718       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpath-attacher:dummy\" at 
100.65.180.222:12345/TCP\nI0802 09:28:09.157811       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:09.188655       1 proxier.go:826] syncProxyRules took 31.434009ms\nI0802 09:28:09.189175       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:09.217539       1 proxier.go:826] syncProxyRules took 28.850237ms\nI0802 09:28:09.732525       1 service.go:275] Service provisioning-4508-9194/csi-hostpathplugin updated: 1 ports\nI0802 09:28:10.117704       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-provisioner updated: 1 ports\nI0802 09:28:10.218262       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpathplugin:dummy\" at 100.65.224.153:12345/TCP\nI0802 09:28:10.218294       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpath-provisioner:dummy\" at 100.68.168.136:12345/TCP\nI0802 09:28:10.218370       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:10.253297       1 proxier.go:826] syncProxyRules took 35.626965ms\nI0802 09:28:10.503010       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-resizer updated: 1 ports\nI0802 09:28:10.888363       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:28:11.172285       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpath-resizer:dummy\" at 100.64.131.120:12345/TCP\nI0802 09:28:11.172321       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpath-snapshotter:dummy\" at 100.71.16.190:12345/TCP\nI0802 09:28:11.172415       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:11.234356       1 proxier.go:826] syncProxyRules took 62.807933ms\nI0802 09:28:12.364345       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:12.439544       1 proxier.go:826] syncProxyRules took 75.883175ms\nI0802 09:28:12.684972       1 service.go:275] Service volume-provisioning-4199/glusterfs-dynamic-4a7e9e77-8307-4e54-b024-19bba81d7cc4 
updated: 1 ports\nI0802 09:28:13.440445       1 service.go:390] Adding new service port \"volume-provisioning-4199/glusterfs-dynamic-4a7e9e77-8307-4e54-b024-19bba81d7cc4\" at 100.66.79.76:1/TCP\nI0802 09:28:13.440594       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:13.507491       1 proxier.go:826] syncProxyRules took 67.815901ms\nI0802 09:28:13.792464       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-attacher updated: 0 ports\nI0802 09:28:14.255126       1 service.go:415] Removing service port \"provisioning-2971-8427/csi-hostpath-attacher:dummy\"\nI0802 09:28:14.255249       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:14.304249       1 proxier.go:826] syncProxyRules took 49.729081ms\nI0802 09:28:14.389544       1 service.go:275] Service provisioning-2971-8427/csi-hostpathplugin updated: 0 ports\nI0802 09:28:14.783684       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-provisioner updated: 0 ports\nI0802 09:28:15.174659       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-resizer updated: 0 ports\nI0802 09:28:15.175306       1 service.go:415] Removing service port \"provisioning-2971-8427/csi-hostpathplugin:dummy\"\nI0802 09:28:15.175327       1 service.go:415] Removing service port \"provisioning-2971-8427/csi-hostpath-provisioner:dummy\"\nI0802 09:28:15.175335       1 service.go:415] Removing service port \"provisioning-2971-8427/csi-hostpath-resizer:dummy\"\nI0802 09:28:15.175420       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:15.228342       1 proxier.go:826] syncProxyRules took 53.631206ms\nI0802 09:28:15.572190       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:28:16.049229       1 service.go:275] Service volume-provisioning-4199/glusterfs-dynamic-4a7e9e77-8307-4e54-b024-19bba81d7cc4 updated: 0 ports\nI0802 09:28:16.229000       1 service.go:415] Removing service port 
\"provisioning-2971-8427/csi-hostpath-snapshotter:dummy\"\nI0802 09:28:16.229032       1 service.go:415] Removing service port \"volume-provisioning-4199/glusterfs-dynamic-4a7e9e77-8307-4e54-b024-19bba81d7cc4\"\nI0802 09:28:16.229131       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:16.257710       1 proxier.go:826] syncProxyRules took 29.22876ms\nI0802 09:28:19.378096       1 service.go:275] Service services-878/service-proxy-toggled updated: 1 ports\nI0802 09:28:19.378694       1 service.go:390] Adding new service port \"services-878/service-proxy-toggled\" at 100.69.249.190:80/TCP\nI0802 09:28:19.378768       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:19.437520       1 proxier.go:826] syncProxyRules took 59.387572ms\nI0802 09:28:19.438183       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:19.491667       1 proxier.go:826] syncProxyRules took 54.110771ms\nI0802 09:28:19.812055       1 service.go:275] Service webhook-3845/e2e-test-webhook updated: 1 ports\nI0802 09:28:20.492466       1 service.go:390] Adding new service port \"webhook-3845/e2e-test-webhook\" at 100.65.164.170:8443/TCP\nI0802 09:28:20.492563       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:20.555089       1 proxier.go:826] syncProxyRules took 63.24606ms\nI0802 09:28:21.557364       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:21.607127       1 proxier.go:826] syncProxyRules took 51.903094ms\nI0802 09:28:25.976765       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:26.024545       1 proxier.go:826] syncProxyRules took 48.240686ms\nI0802 09:28:37.456777       1 service.go:275] Service webhook-3845/e2e-test-webhook updated: 0 ports\nI0802 09:28:37.457438       1 service.go:415] Removing service port \"webhook-3845/e2e-test-webhook\"\nI0802 09:28:37.457524       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:37.485203       1 proxier.go:826] syncProxyRules took 28.371276ms\nI0802 09:28:37.485796       1 proxier.go:871] Syncing iptables 
rules\nI0802 09:28:37.514256       1 proxier.go:826] syncProxyRules took 28.992417ms\nI0802 09:28:40.106102       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-attacher updated: 0 ports\nI0802 09:28:40.106694       1 service.go:415] Removing service port \"provisioning-4508-9194/csi-hostpath-attacher:dummy\"\nI0802 09:28:40.106777       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:40.172355       1 proxier.go:826] syncProxyRules took 66.19638ms\nI0802 09:28:40.173010       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:40.231040       1 proxier.go:826] syncProxyRules took 58.645755ms\nI0802 09:28:40.285331       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-attacher updated: 1 ports\nI0802 09:28:40.688523       1 service.go:275] Service provisioning-4508-9194/csi-hostpathplugin updated: 0 ports\nI0802 09:28:40.859872       1 service.go:275] Service volume-expand-5280-1128/csi-hostpathplugin updated: 1 ports\nI0802 09:28:41.091497       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-provisioner updated: 0 ports\nI0802 09:28:41.233506       1 service.go:415] Removing service port \"provisioning-4508-9194/csi-hostpathplugin:dummy\"\nI0802 09:28:41.233574       1 service.go:390] Adding new service port \"volume-expand-5280-1128/csi-hostpathplugin:dummy\" at 100.69.248.233:12345/TCP\nI0802 09:28:41.233584       1 service.go:415] Removing service port \"provisioning-4508-9194/csi-hostpath-provisioner:dummy\"\nI0802 09:28:41.233597       1 service.go:390] Adding new service port \"volume-expand-5280-1128/csi-hostpath-attacher:dummy\" at 100.68.184.116:12345/TCP\nI0802 09:28:41.233692       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:41.263684       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-provisioner updated: 1 ports\nI0802 09:28:41.309214       1 proxier.go:826] syncProxyRules took 76.239868ms\nI0802 09:28:41.487431       1 service.go:275] Service 
provisioning-4508-9194/csi-hostpath-resizer updated: 0 ports\nI0802 09:28:41.742749       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-resizer updated: 1 ports\nI0802 09:28:41.953257       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:28:42.135718       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:28:42.136307       1 service.go:390] Adding new service port \"volume-expand-5280-1128/csi-hostpath-provisioner:dummy\" at 100.70.52.135:12345/TCP\nI0802 09:28:42.136331       1 service.go:415] Removing service port \"provisioning-4508-9194/csi-hostpath-resizer:dummy\"\nI0802 09:28:42.136347       1 service.go:390] Adding new service port \"volume-expand-5280-1128/csi-hostpath-resizer:dummy\" at 100.66.253.214:12345/TCP\nI0802 09:28:42.136356       1 service.go:415] Removing service port \"provisioning-4508-9194/csi-hostpath-snapshotter:dummy\"\nI0802 09:28:42.136369       1 service.go:390] Adding new service port \"volume-expand-5280-1128/csi-hostpath-snapshotter:dummy\" at 100.65.54.148:12345/TCP\nI0802 09:28:42.136468       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:42.187169       1 proxier.go:826] syncProxyRules took 51.410399ms\nI0802 09:28:43.187937       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:43.226511       1 proxier.go:826] syncProxyRules took 39.257057ms\nI0802 09:28:45.718497       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:45.752520       1 proxier.go:826] syncProxyRules took 34.759176ms\nI0802 09:28:47.119529       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:47.154407       1 proxier.go:826] syncProxyRules took 35.279488ms\nI0802 09:28:49.116381       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:49.147807       1 proxier.go:826] syncProxyRules took 31.890447ms\nI0802 09:28:49.521373       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:49.570351       1 proxier.go:826] 
syncProxyRules took 49.512652ms
I0802 09:28:50.402519       1 service.go:275] Service services-878/service-proxy-toggled updated: 0 ports
I0802 09:28:50.403129       1 service.go:415] Removing service port "services-878/service-proxy-toggled"
I0802 09:28:50.403224       1 proxier.go:871] Syncing iptables rules
I0802 09:28:50.438278       1 proxier.go:826] syncProxyRules took 35.718963ms
I0802 09:28:51.438960       1 proxier.go:871] Syncing iptables rules
I0802 09:28:51.479193       1 proxier.go:826] syncProxyRules took 40.783803ms
I0802 09:28:57.619448       1 proxier.go:871] Syncing iptables rules
I0802 09:28:57.619758       1 service.go:275] Service services-878/service-proxy-toggled updated: 1 ports
I0802 09:28:57.671452       1 proxier.go:826] syncProxyRules took 52.595441ms
I0802 09:28:57.671979       1 service.go:390] Adding new service port "services-878/service-proxy-toggled" at 100.69.249.190:80/TCP
I0802 09:28:57.672048       1 proxier.go:871] Syncing iptables rules
I0802 09:28:57.716734       1 proxier.go:826] syncProxyRules took 45.235036ms
I0802 09:29:10.459374       1 service.go:275] Service webhook-6678/e2e-test-webhook updated: 1 ports
I0802 09:29:10.460036       1 service.go:390] Adding new service port "webhook-6678/e2e-test-webhook" at 100.67.74.125:8443/TCP
I0802 09:29:10.460109       1 proxier.go:871] Syncing iptables rules
I0802 09:29:10.493998       1 proxier.go:826] syncProxyRules took 34.566007ms
I0802 09:29:10.494622       1 proxier.go:871] Syncing iptables rules
I0802 09:29:10.526298       1 proxier.go:826] syncProxyRules took 32.26709ms
I0802 09:29:13.209247       1 service.go:275] Service webhook-6678/e2e-test-webhook updated: 0 ports
I0802 09:29:13.209752       1 service.go:415] Removing service port "webhook-6678/e2e-test-webhook"
I0802 09:29:13.209826       1 proxier.go:871] Syncing iptables rules
I0802 09:29:13.245949       1 proxier.go:826] syncProxyRules took 36.666711ms
I0802 09:29:13.768703       1 proxier.go:871] Syncing iptables rules
I0802 09:29:13.829328       1 proxier.go:826] syncProxyRules took 61.206825ms
I0802 09:29:18.937623       1 service.go:275] Service services-6068/multi-endpoint-test updated: 2 ports
I0802 09:29:18.938103       1 service.go:390] Adding new service port "services-6068/multi-endpoint-test:portname1" at 100.66.255.138:80/TCP
I0802 09:29:18.938128       1 service.go:390] Adding new service port "services-6068/multi-endpoint-test:portname2" at 100.66.255.138:81/TCP
I0802 09:29:18.938191       1 proxier.go:871] Syncing iptables rules
I0802 09:29:18.966339       1 proxier.go:826] syncProxyRules took 28.67316ms
I0802 09:29:18.966829       1 proxier.go:871] Syncing iptables rules
I0802 09:29:18.995832       1 proxier.go:826] syncProxyRules took 29.458053ms
I0802 09:29:19.942394       1 service.go:275] Service services-878/service-proxy-toggled updated: 0 ports
I0802 09:29:19.942868       1 service.go:415] Removing service port "services-878/service-proxy-toggled"
I0802 09:29:19.942977       1 proxier.go:871] Syncing iptables rules
I0802 09:29:19.971536       1 proxier.go:826] syncProxyRules took 29.108431ms
I0802 09:29:20.972315       1 proxier.go:871] Syncing iptables rules
I0802 09:29:21.126131       1 proxier.go:826] syncProxyRules took 154.45097ms
I0802 09:29:22.126773       1 proxier.go:871] Syncing iptables rules
I0802 09:29:22.186655       1 proxier.go:826] syncProxyRules took 60.373278ms
I0802 09:29:23.741026       1 proxier.go:871] Syncing iptables rules
I0802 09:29:23.774211       1 proxier.go:826] syncProxyRules took 33.678333ms
I0802 09:29:24.516479       1 service.go:275] Service dns-4443/test-service-2 updated: 1 ports
I0802 09:29:24.516956       1 service.go:390] Adding new service port "dns-4443/test-service-2:http" at 100.71.195.231:80/TCP
I0802 09:29:24.517012       1 proxier.go:871] Syncing iptables rules
I0802 09:29:24.545110       1 proxier.go:826] syncProxyRules took 28.588364ms
I0802 09:29:25.194246       1 proxier.go:871] Syncing iptables rules
I0802 09:29:25.238544       1 proxier.go:826] syncProxyRules took 45.081987ms
I0802 09:29:26.260846       1 proxier.go:871] Syncing iptables rules
I0802 09:29:26.316411       1 proxier.go:826] syncProxyRules took 56.166671ms
I0802 09:29:27.015124       1 service.go:275] Service services-6068/multi-endpoint-test updated: 0 ports
I0802 09:29:27.015602       1 service.go:415] Removing service port "services-6068/multi-endpoint-test:portname2"
I0802 09:29:27.015616       1 service.go:415] Removing service port "services-6068/multi-endpoint-test:portname1"
I0802 09:29:27.015679       1 proxier.go:871] Syncing iptables rules
I0802 09:29:27.057545       1 proxier.go:826] syncProxyRules took 42.380726ms
I0802 09:29:28.061860       1 proxier.go:871] Syncing iptables rules
I0802 09:29:28.193736       1 proxier.go:826] syncProxyRules took 136.05677ms
I0802 09:29:43.214769       1 service.go:275] Service provisioning-16-2189/csi-hostpath-attacher updated: 1 ports
I0802 09:29:43.215362       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpath-attacher:dummy" at 100.64.227.251:12345/TCP
I0802 09:29:43.215437       1 proxier.go:871] Syncing iptables rules
I0802 09:29:43.270572       1 proxier.go:826] syncProxyRules took 55.7653ms
I0802 09:29:43.271651       1 proxier.go:871] Syncing iptables rules
I0802 09:29:43.422787       1 proxier.go:826] syncProxyRules took 151.999712ms
I0802 09:29:43.794143       1 service.go:275] Service provisioning-16-2189/csi-hostpathplugin updated: 1 ports
I0802 09:29:44.181643       1 service.go:275] Service provisioning-16-2189/csi-hostpath-provisioner updated: 1 ports
I0802 09:29:44.391233       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpathplugin:dummy" at 100.68.142.127:12345/TCP
I0802 09:29:44.391266       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpath-provisioner:dummy" at 100.67.83.87:12345/TCP
I0802 09:29:44.391365       1 proxier.go:871] Syncing iptables rules
I0802 09:29:44.476944       1 proxier.go:826] syncProxyRules took 86.320232ms
I0802 09:29:44.575197       1 service.go:275] Service provisioning-16-2189/csi-hostpath-resizer updated: 1 ports
I0802 09:29:44.957294       1 service.go:275] Service provisioning-16-2189/csi-hostpath-snapshotter updated: 1 ports
I0802 09:29:45.479567       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpath-resizer:dummy" at 100.64.33.107:12345/TCP
I0802 09:29:45.479597       1 service.go:390] Adding new service port "provisioning-16-2189/csi-hostpath-snapshotter:dummy" at 100.67.177.99:12345/TCP
I0802 09:29:45.479675       1 proxier.go:871] Syncing iptables rules
I0802 09:29:45.627309       1 proxier.go:826] syncProxyRules took 148.332907ms
I0802 09:29:46.925061       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-attacher updated: 0 ports
I0802 09:29:46.925708       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpath-attacher:dummy"
I0802 09:29:46.925791       1 proxier.go:871] Syncing iptables rules
I0802 09:29:46.971221       1 proxier.go:826] syncProxyRules took 46.12083ms
I0802 09:29:47.509201       1 service.go:275] Service volume-expand-5280-1128/csi-hostpathplugin updated: 0 ports
I0802 09:29:47.509785       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpathplugin:dummy"
I0802 09:29:47.509860       1 proxier.go:871] Syncing iptables rules
I0802 09:29:47.549078       1 proxier.go:826] syncProxyRules took 39.841764ms
I0802 09:29:47.956243       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-provisioner updated: 0 ports
I0802 09:29:48.311246       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-resizer updated: 0 ports
I0802 09:29:48.311742       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpath-provisioner:dummy"
I0802 09:29:48.311767       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpath-resizer:dummy"
I0802 09:29:48.311859       1 proxier.go:871] Syncing iptables rules
I0802 09:29:48.353042       1 proxier.go:826] syncProxyRules took 41.757595ms
I0802 09:29:48.708723       1 service.go:275] Service volume-expand-5280-1128/csi-hostpath-snapshotter updated: 0 ports
I0802 09:29:49.354224       1 service.go:415] Removing service port "volume-expand-5280-1128/csi-hostpath-snapshotter:dummy"
I0802 09:29:49.354368       1 proxier.go:871] Syncing iptables rules
I0802 09:29:49.399146       1 proxier.go:826] syncProxyRules took 45.607853ms
I0802 09:29:50.582947       1 proxier.go:871] Syncing iptables rules
I0802 09:29:50.616486       1 proxier.go:826] syncProxyRules took 34.114525ms
I0802 09:29:51.617235       1 proxier.go:871] Syncing iptables rules
I0802 09:29:51.647981       1 proxier.go:826] syncProxyRules took 31.367756ms
I0802 09:30:05.296276       1 proxier.go:871] Syncing iptables rules
I0802 09:30:05.337839       1 proxier.go:826] syncProxyRules took 42.139155ms
I0802 09:30:05.490013       1 service.go:275] Service dns-4443/test-service-2 updated: 0 ports
I0802 09:30:05.490813       1 service.go:415] Removing service port "dns-4443/test-service-2:http"
I0802 09:30:05.491083       1 proxier.go:871] Syncing iptables rules
I0802 09:30:05.535272       1 proxier.go:826] syncProxyRules took 45.051235ms
I0802 09:30:06.546097       1 proxier.go:871] Syncing iptables rules
I0802 09:30:06.614869       1 proxier.go:826] syncProxyRules took 78.783226ms
I0802 09:30:10.257847       1 service.go:275] Service volumemode-685-3249/csi-hostpath-attacher updated: 1 ports
I0802 09:30:10.258543       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpath-attacher:dummy" at 100.71.70.1:12345/TCP
I0802 09:30:10.258632       1 proxier.go:871] Syncing iptables rules
I0802 09:30:10.288948       1 proxier.go:826] syncProxyRules took 31.041485ms
I0802 09:30:10.289488       1 proxier.go:871] Syncing iptables rules
I0802 09:30:10.320160       1 proxier.go:826] syncProxyRules took 31.174524ms
I0802 09:30:10.840448       1 service.go:275] Service volumemode-685-3249/csi-hostpathplugin updated: 1 ports
I0802 09:30:11.232062       1 service.go:275] Service volumemode-685-3249/csi-hostpath-provisioner updated: 1 ports
I0802 09:30:11.320799       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpathplugin:dummy" at 100.64.160.4:12345/TCP
I0802 09:30:11.320834       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpath-provisioner:dummy" at 100.64.110.51:12345/TCP
I0802 09:30:11.320930       1 proxier.go:871] Syncing iptables rules
I0802 09:30:11.362945       1 proxier.go:826] syncProxyRules took 42.689264ms
I0802 09:30:11.624473       1 service.go:275] Service volumemode-685-3249/csi-hostpath-resizer updated: 1 ports
I0802 09:30:12.015031       1 service.go:275] Service volumemode-685-3249/csi-hostpath-snapshotter updated: 1 ports
I0802 09:30:12.363642       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpath-resizer:dummy" at 100.68.58.197:12345/TCP
I0802 09:30:12.363678       1 service.go:390] Adding new service port "volumemode-685-3249/csi-hostpath-snapshotter:dummy" at 100.67.184.34:12345/TCP
I0802 09:30:12.363756       1 proxier.go:871] Syncing iptables rules
I0802 09:30:12.395362       1 proxier.go:826] syncProxyRules took 32.234681ms
I0802 09:30:14.199698       1 service.go:275] Service provisioning-16-2189/csi-hostpath-attacher updated: 0 ports
I0802 09:30:14.200341       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpath-attacher:dummy"
I0802 09:30:14.200446       1 proxier.go:871] Syncing iptables rules
I0802 09:30:14.228338       1 proxier.go:826] syncProxyRules took 28.556671ms
I0802 09:30:14.662739       1 proxier.go:871] Syncing iptables rules
I0802 09:30:14.712610       1 proxier.go:826] syncProxyRules took 50.38452ms
I0802 09:30:14.801561       1 service.go:275] Service provisioning-16-2189/csi-hostpathplugin updated: 0 ports
I0802 09:30:15.197123       1 service.go:275] Service provisioning-16-2189/csi-hostpath-provisioner updated: 0 ports
I0802 09:30:15.590395       1 service.go:275] Service provisioning-16-2189/csi-hostpath-resizer updated: 0 ports
I0802 09:30:15.591035       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpath-provisioner:dummy"
I0802 09:30:15.591059       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpath-resizer:dummy"
I0802 09:30:15.591069       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpathplugin:dummy"
I0802 09:30:15.591189       1 proxier.go:871] Syncing iptables rules
I0802 09:30:15.644189       1 proxier.go:826] syncProxyRules took 53.75296ms
I0802 09:30:15.990816       1 service.go:275] Service provisioning-16-2189/csi-hostpath-snapshotter updated: 0 ports
I0802 09:30:16.382434       1 service.go:415] Removing service port "provisioning-16-2189/csi-hostpath-snapshotter:dummy"
I0802 09:30:16.382546       1 proxier.go:871] Syncing iptables rules
I0802 09:30:16.411315       1 proxier.go:826] syncProxyRules took 29.291931ms
I0802 09:30:17.381079       1 proxier.go:871] Syncing iptables rules
I0802 09:30:17.408923       1 proxier.go:826] syncProxyRules took 28.311566ms
I0802 09:30:18.409999       1 proxier.go:871] Syncing iptables rules
I0802 09:30:18.475535       1 proxier.go:826] syncProxyRules took 66.444097ms
I0802 09:30:24.697011       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-attacher updated: 1 ports
I0802 09:30:24.697460       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpath-attacher:dummy" at 100.69.156.225:12345/TCP
I0802 09:30:24.697567       1 proxier.go:871] Syncing iptables rules
I0802 09:30:24.729650       1 proxier.go:826] syncProxyRules took 32.605863ms
I0802 09:30:24.730828       1 proxier.go:871] Syncing iptables rules
I0802 09:30:24.764805       1 proxier.go:826] syncProxyRules took 35.119813ms
I0802 09:30:25.273994       1 service.go:275] Service volumemode-5208-5908/csi-hostpathplugin updated: 1 ports
I0802 09:30:25.662246       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-provisioner updated: 1 ports
I0802 09:30:25.765425       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpath-provisioner:dummy" at 100.71.179.180:12345/TCP
I0802 09:30:25.765456       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpathplugin:dummy" at 100.71.116.224:12345/TCP
I0802 09:30:25.765529       1 proxier.go:871] Syncing iptables rules
I0802 09:30:25.798003       1 proxier.go:826] syncProxyRules took 33.066459ms
I0802 09:30:26.049112       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-resizer updated: 1 ports
I0802 09:30:26.432170       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-snapshotter updated: 1 ports
I0802 09:30:26.798532       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpath-resizer:dummy" at 100.70.139.57:12345/TCP
I0802 09:30:26.798562       1 service.go:390] Adding new service port "volumemode-5208-5908/csi-hostpath-snapshotter:dummy" at 100.65.43.24:12345/TCP
I0802 09:30:26.798616       1 proxier.go:871] Syncing iptables rules
I0802 09:30:26.837949       1 proxier.go:826] syncProxyRules took 39.860235ms
I0802 09:30:29.915666       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-attacher updated: 1 ports
I0802 09:30:29.916266       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpath-attacher:dummy" at 100.65.85.203:12345/TCP
I0802 09:30:29.916348       1 proxier.go:871] Syncing iptables rules
I0802 09:30:29.953962       1 proxier.go:826] syncProxyRules took 38.255224ms
I0802 09:30:29.954491       1 proxier.go:871] Syncing iptables rules
I0802 09:30:29.989427       1 proxier.go:826] syncProxyRules took 35.429769ms
I0802 09:30:30.500281       1 service.go:275] Service ephemeral-2872-6578/csi-hostpathplugin updated: 1 ports
I0802 09:30:30.885918       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-provisioner updated: 1 ports
I0802 09:30:30.990135       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpathplugin:dummy" at 100.67.116.240:12345/TCP
I0802 09:30:30.990168       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpath-provisioner:dummy" at 100.68.12.225:12345/TCP
I0802 09:30:30.990308       1 proxier.go:871] Syncing iptables rules
I0802 09:30:31.019120       1 proxier.go:826] syncProxyRules took 29.546585ms
I0802 09:30:31.278720       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-resizer updated: 1 ports
I0802 09:30:31.669840       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-snapshotter updated: 1 ports
I0802 09:30:32.019827       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpath-resizer:dummy" at 100.67.86.111:12345/TCP
I0802 09:30:32.019871       1 service.go:390] Adding new service port "ephemeral-2872-6578/csi-hostpath-snapshotter:dummy" at 100.65.59.49:12345/TCP
I0802 09:30:32.020023       1 proxier.go:871] Syncing iptables rules
I0802 09:30:32.079616       1 proxier.go:826] syncProxyRules took 60.30744ms
I0802 09:30:33.995479       1 proxier.go:871] Syncing iptables rules
I0802 09:30:34.032718       1 proxier.go:826] syncProxyRules took 37.960275ms
I0802 09:30:34.375344       1 proxier.go:871] Syncing iptables rules
I0802 09:30:34.409861       1 proxier.go:826] syncProxyRules took 35.744974ms
I0802 09:30:35.571088       1 proxier.go:871] Syncing iptables rules
I0802 09:30:35.611632       1 proxier.go:826] syncProxyRules took 41.134073ms
I0802 09:30:36.612395       1 proxier.go:871] Syncing iptables rules
I0802 09:30:36.651286       1 proxier.go:826] syncProxyRules took 39.497891ms
I0802 09:30:38.378333       1 proxier.go:871] Syncing iptables rules
I0802 09:30:38.408456       1 proxier.go:826] syncProxyRules took 30.681382ms
I0802 09:30:38.781446       1 proxier.go:871] Syncing iptables rules
I0802 09:30:38.812229       1 proxier.go:826] syncProxyRules took 31.354791ms
I0802 09:30:39.580987       1 proxier.go:871] Syncing iptables rules
I0802 09:30:39.613381       1 proxier.go:826] syncProxyRules took 33.019141ms
I0802 09:30:41.377504       1 proxier.go:871] Syncing iptables rules
I0802 09:30:41.408448       1 proxier.go:826] syncProxyRules took 31.497653ms
I0802 09:30:41.781789       1 proxier.go:871] Syncing iptables rules
I0802 09:30:41.813998       1 proxier.go:826] syncProxyRules took 32.753792ms
I0802 09:30:44.966020       1 service.go:275] Service kubectl-8312/agnhost-primary updated: 1 ports
I0802 09:30:44.967639       1 service.go:390] Adding new service port "kubectl-8312/agnhost-primary" at 100.69.109.236:6379/TCP
I0802 09:30:44.968110       1 proxier.go:871] Syncing iptables rules
I0802 09:30:45.041058       1 proxier.go:826] syncProxyRules took 74.997917ms
I0802 09:30:45.042122       1 proxier.go:871] Syncing iptables rules
I0802 09:30:45.106239       1 proxier.go:826] syncProxyRules took 65.137934ms
I0802 09:30:46.106992       1 proxier.go:871] Syncing iptables rules
I0802 09:30:46.136710       1 proxier.go:826] syncProxyRules took 30.342866ms
I0802 09:30:46.705412       1 service.go:275] Service volumemode-685-3249/csi-hostpath-attacher updated: 0 ports
I0802 09:30:47.137340       1 service.go:415] Removing service port "volumemode-685-3249/csi-hostpath-attacher:dummy"
I0802 09:30:47.137459       1 proxier.go:871] Syncing iptables rules
I0802 09:30:47.166646       1 proxier.go:826] syncProxyRules took 29.803506ms
I0802 09:30:47.295220       1 service.go:275] Service volumemode-685-3249/csi-hostpathplugin updated: 0 ports
I0802 09:30:47.696628       1 service.go:275] Service volumemode-685-3249/csi-hostpath-provisioner updated: 0 ports
I0802 09:30:48.096020       1 service.go:275] Service volumemode-685-3249/csi-hostpath-resizer updated: 0 ports
I0802 09:30:48.096841       1 service.go:415] Removing service port "volumemode-685-3249/csi-hostpathplugin:dummy"
I0802 09:30:48.096866       1 service.go:415] Removing service port "volumemode-685-3249/csi-hostpath-provisioner:dummy"
I0802 09:30:48.096876       1 service.go:415] Removing service port "volumemode-685-3249/csi-hostpath-resizer:dummy"
I0802 09:30:48.097010       1 proxier.go:871] Syncing iptables rules
I0802 09:30:48.142763       1 proxier.go:826] syncProxyRules took 46.703487ms
I0802 09:30:48.500751       1 service.go:275] Service volumemode-685-3249/csi-hostpath-snapshotter updated: 0 ports
I0802 09:30:49.143449       1 service.go:415] Removing service port "volumemode-685-3249/csi-hostpath-snapshotter:dummy"
I0802 09:30:49.143572       1 proxier.go:871] Syncing iptables rules
I0802 09:30:49.172645       1 proxier.go:826] syncProxyRules took 29.714364ms
I0802 09:30:58.833933       1 service.go:275] Service kubectl-8312/agnhost-primary updated: 0 ports
I0802 09:30:58.834505       1 service.go:415] Removing service port "kubectl-8312/agnhost-primary"
I0802 09:30:58.834599       1 proxier.go:871] Syncing iptables rules
I0802 09:30:58.866977       1 proxier.go:826] syncProxyRules took 33.005288ms
I0802 09:30:58.867578       1 proxier.go:871] Syncing iptables rules
I0802 09:30:58.916198       1 proxier.go:826] syncProxyRules took 49.187373ms
I0802 09:31:07.361390       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-attacher updated: 0 ports
I0802 09:31:07.362619       1 service.go:415] Removing service port "volumemode-5208-5908/csi-hostpath-attacher:dummy"
I0802 09:31:07.362699       1 proxier.go:871] Syncing iptables rules
I0802 09:31:07.397597       1 proxier.go:826] syncProxyRules took 36.163243ms
I0802 09:31:07.398256       1 proxier.go:871] Syncing iptables rules
I0802 09:31:07.431963       1 proxier.go:826] syncProxyRules took 34.329037ms
I0802 09:31:07.944170       1 service.go:275] Service volumemode-5208-5908/csi-hostpathplugin updated: 0 ports
I0802 09:31:08.332855       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-provisioner updated: 0 ports
I0802 09:31:08.433017       1 service.go:415] Removing service port "volumemode-5208-5908/csi-hostpathplugin:dummy"
I0802 09:31:08.433055       1 service.go:415] Removing service port "volumemode-5208-5908/csi-hostpath-provisioner:dummy"
I0802 09:31:08.433157       1 proxier.go:871] Syncing iptables rules
I0802 09:31:08.491690       1 proxier.go:826] syncProxyRules took 59.472869ms
I0802 09:31:08.722143       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-resizer updated: 0 ports
I0802 09:31:09.113556       1 service.go:275] Service volumemode-5208-5908/csi-hostpath-snapshotter updated: 0 ports
I0802 09:31:09.492359       1 service.go:415] Removing service port "volumemode-5208-5908/csi-hostpath-resizer:dummy"
I0802 09:31:09.492393       1 service.go:415] Removing service port "volumemode-5208-5908/csi-hostpath-snapshotter:dummy"
I0802 09:31:09.492492       1 proxier.go:871] Syncing iptables rules
I0802 09:31:09.532734       1 proxier.go:826] syncProxyRules took 40.901767ms
I0802 09:31:39.573692       1 service.go:275] Service services-2137/endpoint-test2 updated: 1 ports
I0802 09:31:39.574165       1 service.go:390] Adding new service port "services-2137/endpoint-test2" at 100.65.50.254:80/TCP
I0802 09:31:39.574240       1 proxier.go:871] Syncing iptables rules
I0802 09:31:39.603611       1 proxier.go:826] syncProxyRules took 29.880985ms
I0802 09:31:39.604132       1 proxier.go:871] Syncing iptables rules
I0802 09:31:39.632183       1 proxier.go:826] syncProxyRules took 28.535612ms
I0802 09:31:39.875271       1 service.go:275] Service services-295/tolerate-unready updated: 1 ports
I0802 09:31:40.587405       1 service.go:390] Adding new service port "services-295/tolerate-unready:http" at 100.67.114.99:80/TCP
I0802 09:31:40.587500       1 proxier.go:871] Syncing iptables rules
I0802 09:31:40.627709       1 proxier.go:826] syncProxyRules took 40.828389ms
I0802 09:31:48.987621       1 proxier.go:871] Syncing iptables rules
I0802 09:31:49.025116       1 proxier.go:826] syncProxyRules took 38.261431ms
I0802 09:31:53.388496       1 proxier.go:871] Syncing iptables rules
I0802 09:31:53.425247       1 proxier.go:826] syncProxyRules took 38.128173ms
I0802 09:31:54.885583       1 proxier.go:871] Syncing iptables rules
I0802 09:31:54.918761       1 proxier.go:826] syncProxyRules took 33.812417ms
I0802 09:31:55.872602       1 proxier.go:871] Syncing iptables rules
I0802 09:31:55.927715       1 proxier.go:826] syncProxyRules took 55.855932ms
I0802 09:31:55.928803       1 proxier.go:871] Syncing iptables rules
I0802 09:31:55.980871       1 proxier.go:826] syncProxyRules took 53.114337ms
I0802 09:31:56.981930       1 proxier.go:871] Syncing iptables rules
I0802 09:31:57.010655       1 proxier.go:826] syncProxyRules took 29.565977ms
I0802 09:31:57.638962       1 service.go:275] Service services-2137/endpoint-test2 updated: 0 ports
I0802 09:31:58.011591       1 service.go:415] Removing service port "services-2137/endpoint-test2"
I0802 09:31:58.011692       1 proxier.go:871] Syncing iptables rules
I0802 09:31:58.047476       1 proxier.go:826] syncProxyRules took 36.684211ms
I0802 09:31:59.000635       1 service.go:275] Service services-295/tolerate-unready updated: 0 ports
I0802 09:31:59.001508       1 service.go:415] Removing service port "services-295/tolerate-unready:http"
I0802 09:31:59.001625       1 proxier.go:871] Syncing iptables rules
I0802 09:31:59.034656       1 proxier.go:826] syncProxyRules took 33.98163ms
I0802 09:32:00.036279       1 proxier.go:871] Syncing iptables rules
I0802 09:32:00.080673       1 proxier.go:826] syncProxyRules took 45.877479ms
I0802 09:32:00.586521       1 service.go:275] Service services-9578/hairpin-test updated: 1 ports
I0802 09:32:01.081635       1 service.go:390] Adding new service port "services-9578/hairpin-test" at 100.71.179.245:8080/TCP
I0802 09:32:01.081752       1 proxier.go:871] Syncing iptables rules
I0802 09:32:01.150715       1 proxier.go:826] syncProxyRules took 69.887506ms
I0802 09:32:01.996395       1 proxier.go:871] Syncing iptables rules
I0802 09:32:02.046916       1 proxier.go:826] syncProxyRules took 51.388181ms
I0802 09:32:12.442061       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-attacher updated: 0 ports
I0802 09:32:12.442620       1 service.go:415] Removing service port "ephemeral-2872-6578/csi-hostpath-attacher:dummy"
I0802 09:32:12.442707       1 proxier.go:871] Syncing iptables rules
I0802 09:32:12.471669       1 proxier.go:826] syncProxyRules took 29.567427ms
I0802 09:32:12.632407       1 service.go:275] Service volume-expand-8616-827/csi-hostpath-attacher updated: 1 ports
I0802 09:32:12.632963       1 service.go:390] Adding new service port "volume-expand-8616-827/csi-hostpath-attacher:dummy" at 100.68.103.208:12345/TCP
I0802 09:32:12.633047       1 proxier.go:871] Syncing iptables rules
I0802 09:32:12.662584       1 proxier.go:826] syncProxyRules took 30.126727ms
I0802 09:32:13.044785       1 service.go:275] Service ephemeral-2872-6578/csi-hostpathplugin updated: 0 ports
I0802 09:32:13.209482       1 service.go:275] Service volume-expand-8616-827/csi-hostpathplugin updated: 1 ports
I0802 09:32:13.437209       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-provisioner updated: 0 ports
I0802 09:32:13.449029       1 service.go:415] Removing service port "ephemeral-2872-6578/csi-hostpathplugin:dummy"
I0802 09:32:13.449072       1 service.go:390] Adding new service port "volume-expand-8616-827/csi-hostpathplugin:dummy" at 100.66.147.83:12345/TCP
I0802 09:32:13.449083       1 service.go:415] Removing service port "ephemeral-2872-6578/csi-hostpath-provisioner:dummy"
I0802 09:32:13.449183       1 proxier.go:871] Syncing iptables rules
I0802 09:32:13.488325       1 proxier.go:826] syncProxyRules took 39.865957ms
I0802 09:32:13.597022       1 service.go:275] Service volume-expand-8616-827/csi-hostpath-provisioner updated: 1 ports
I0802 09:32:13.832097       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-resizer updated: 0 ports
I0802 09:32:13.983273       1 service.go:275] Service volume-expand-8616-827/csi-hostpath-resizer updated: 1 ports
I0802 09:32:14.232127       1 service.go:275] Service ephemeral-2872-6578/csi-hostpath-snapshotter updated: 0 ports
I0802 09:32:14.377283       1 service.go:275] Service volume-expand-8616-827/csi-hostpath-snapshotter updated: 1 ports
I0802 09:32:14.488965       1 service.go:415] Removing service port "ephemeral-2872-6578/csi-hostpath-snapshotter:dummy"
I0802 09:32:14.489007       1 service.go:390] Adding new service port "volume-expand-8616-827/csi-hostpath-snapshotter:dummy" at 100.65.206.78:12345/TCP
I0802 09:32:14.489021       1 service.go:390] Adding new service port "volume-expand-8616-827/csi-hostpath-provisioner:dummy" at 100.69.16.155:12345/TCP
I0802 09:32:14.489031       1 service.go:415] Removing service port "ephemeral-2872-6578/csi-hostpath-resizer:dummy"
I0802 09:32:14.489053       1 service.go:390] Adding new service port "volume-expand-8616-827/csi-hostpath-resizer:dummy" at 100.65.68.228:12345/TCP
I0802 09:32:14.489162       1 proxier.go:871] Syncing iptables rules
I0802 09:32:14.534270       1 proxier.go:826] syncProxyRules took 45.81489ms
I0802 09:32:14.551089       1 service.go:275] Service services-9578/hairpin-test updated: 0 ports
I0802 09:32:15.535829       1 service.go:415] Removing service port "services-9578/hairpin-test"
I0802 09:32:15.535969       1 proxier.go:871] Syncing iptables rules
I0802 09:32:15.594486       1 proxier.go:826] syncProxyRules took 59.455518ms
==== END logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-47-13.ap-southeast-2.compute.internal ====
==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-48-162.ap-southeast-2.compute.internal ====
I0802 09:16:14.090909       1 flags.go:59] FLAG: --add-dir-header="false"
I0802 09:16:14.091630       1 flags.go:59] FLAG: --alsologtostderr="true"
I0802 09:16:14.091647       1 flags.go:59] FLAG: --bind-address="0.0.0.0"
I0802 09:16:14.091655       1 flags.go:59] FLAG: --bind-address-hard-fail="false"
I0802 09:16:14.091663       1 flags.go:59] FLAG: --cleanup="false"
I0802 09:16:14.091669       1 flags.go:59] FLAG: --cleanup-ipvs="true"
I0802 09:16:14.091674       1 flags.go:59] FLAG: --cluster-cidr="100.96.0.0/11"
I0802 09:16:14.091690       1 flags.go:59] FLAG: --config=""
I0802 09:16:14.091695       1 flags.go:59] FLAG: --config-sync-period="15m0s"
I0802 09:16:14.091704       1 flags.go:59] FLAG: --conntrack-max-per-core="131072"
I0802 09:16:14.091713       1 flags.go:59] FLAG: --conntrack-min="131072"
I0802 09:16:14.091719       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I0802 09:16:14.091725       1 flags.go:59] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I0802 09:16:14.091733       1 flags.go:59] FLAG: --detect-local-mode=""
I0802 09:16:14.091741       1 flags.go:59] FLAG: --feature-gates=""
I0802 09:16:14.091749       1 flags.go:59] FLAG: --healthz-bind-address="0.0.0.0:10256"
I0802 09:16:14.091756       1 flags.go:59] FLAG: --healthz-port="10256"
I0802 09:16:14.091782       1 flags.go:59] FLAG: --help="false"
I0802 09:16:14.091787       1 flags.go:59] FLAG: --hostname-override="ip-172-20-48-162.ap-southeast-2.compute.internal"
I0802 09:16:14.091792       1 flags.go:59] FLAG: --iptables-masquerade-bit="14"
I0802 09:16:14.091797       1 flags.go:59] FLAG: --iptables-min-sync-period="1s"
I0802 09:16:14.091803       1 flags.go:59] FLAG: --iptables-sync-period="30s"
I0802 09:16:14.091809       1 flags.go:59] FLAG: --ipvs-exclude-cidrs="[]"
I0802 09:16:14.091836       1 flags.go:59] FLAG: --ipvs-min-sync-period="0s"
I0802 09:16:14.091843       1 flags.go:59] FLAG: --ipvs-scheduler=""
I0802 09:16:14.091848       1 flags.go:59] FLAG: --ipvs-strict-arp="false"
I0802 09:16:14.091853       1 flags.go:59] FLAG: --ipvs-sync-period="30s"
I0802 09:16:14.091858       1 flags.go:59] FLAG: --ipvs-tcp-timeout="0s"
I0802 09:16:14.091862       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout="0s"
I0802 09:16:14.091868       1 flags.go:59] FLAG: --ipvs-udp-timeout="0s"
I0802 09:16:14.091875       1 flags.go:59] FLAG: --kube-api-burst="10"
I0802 09:16:14.091882       1 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0802 09:16:14.091889       1 flags.go:59] FLAG: --kube-api-qps="5"
I0802 09:16:14.091899       1 flags.go:59] FLAG: --kubeconfig="/var/lib/kube-proxy/kubeconfig"
I0802 09:16:14.091904       1 flags.go:59] FLAG: --log-backtrace-at=":0"
I0802 09:16:14.091913       1 flags.go:59] FLAG: --log-dir=""
I0802 09:16:14.091918       1 flags.go:59] FLAG: --log-file="/var/log/kube-proxy.log"
I0802 09:16:14.091924       1 flags.go:59] FLAG: --log-file-max-size="1800"
I0802 09:16:14.091929       1 flags.go:59] FLAG: --log-flush-frequency="5s"
I0802 09:16:14.091934       1 flags.go:59] FLAG: --logtostderr="false"
I0802 09:16:14.091939       1 flags.go:59] FLAG: --masquerade-all="false"
I0802 09:16:14.091944       1 flags.go:59] FLAG: --master="https://api.internal.e2e-8608f95a98-9381a.test-cncf-aws.k8s.io"
I0802 09:16:14.091951       1 flags.go:59] FLAG: --metrics-bind-address="127.0.0.1:10249"
I0802 09:16:14.091957       1 flags.go:59] FLAG: --metrics-port="10249"
I0802 09:16:14.091962       1 flags.go:59] FLAG: --nodeport-addresses="[]"
I0802 09:16:14.091968       1 flags.go:59] FLAG: --one-output="false"
I0802 09:16:14.091973       1 flags.go:59] FLAG: --oom-score-adj="-998"
I0802 09:16:14.091978       1 flags.go:59] FLAG: --profiling="false"
I0802 09:16:14.091983       1 flags.go:59] FLAG: --proxy-mode=""
I0802 09:16:14.091990       1 flags.go:59] FLAG: --proxy-port-range=""
I0802 09:16:14.091996       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=""
I0802 09:16:14.092001       1 flags.go:59] FLAG: --skip-headers="false"
I0802 09:16:14.092006       1 flags.go:59] FLAG: --skip-log-headers="false"
I0802 09:16:14.092011       1 flags.go:59] FLAG: --stderrthreshold="2"
I0802 09:16:14.092016       1 flags.go:59] FLAG: --udp-timeout="250ms"
I0802 09:16:14.092022       1 flags.go:59] FLAG: --v="2"
I0802 09:16:14.092026       1 flags.go:59] FLAG: --version="false"
I0802 09:16:14.092041       1 flags.go:59] FLAG: --vmodule=""
I0802 09:16:14.092045       1 flags.go:59] FLAG: --write-config-to=""
W0802 09:16:14.092053       1 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0802 09:16:14.092132       1 feature_gate.go:243] feature gates: &{map[]}
I0802 09:16:14.092237       1 feature_gate.go:243] feature gates: &{map[]}
I0802 09:16:14.200215       1 node.go:172] Successfully retrieved node IP: 172.20.48.162
I0802 09:16:14.200253       1 server_others.go:142] kube-proxy node IP is an IPv4 address (172.20.48.162), assume IPv4 operation
W0802 09:16:14.264779       1 server_others.go:584] Unknown proxy mode "", assuming iptables proxy
I0802 09:16:14.264899       1 server_others.go:182] DetectLocalMode: 'ClusterCIDR'
I0802 09:16:14.264915       1 server_others.go:185] Using iptables Proxier.
I0802 09:16:14.264977       1 utils.go:321] Changed sysctl "net/ipv4/conf/all/route_localnet": 0 -> 1
I0802 09:16:14.265022       1 proxier.go:287] iptables(IPv4) masquerade mark: 0x00004000
I0802 09:16:14.265058       1 proxier.go:334] iptables(IPv4) sync params: minSyncPeriod=1s, syncPeriod=30s, burstSyncs=2
I0802 09:16:14.265091       1 proxier.go:346] iptables(IPv4) supports --random-fully
I0802 09:16:14.265779       1 server.go:650] Version: v1.20.9
I0802 09:16:14.266389       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144
I0802 09:16:14.267063       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0802 09:16:14.267354       1 mount_linux.go:188] Detected OS without systemd
I0802 09:16:14.268070       1 conntrack.go:83] Setting conntrack hashsize to 65536
I0802 09:16:14.272382       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0802 09:16:14.272443       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0802 09:16:14.274477       1 config.go:315] Starting service config controller
I0802 09:16:14.274579       1 shared_informer.go:240] Waiting for caches to sync for service config
I0802 09:16:14.274508       1 config.go:224] Starting endpoint slice config controller
I0802 09:16:14.274761       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0802 09:16:14.274774       1 reflector.go:219] Starting reflector *v1.Service (15m0s) from k8s.io/client-go/informers/factory.go:134
I0802 09:16:14.274992       1 reflector.go:219] Starting reflector *v1beta1.EndpointSlice (15m0s) from k8s.io/client-go/informers/factory.go:134
I0802 09:16:14.277167       1 service.go:275] Service default/kubernetes updated: 1 ports
I0802 09:16:14.277211       1 service.go:275] Service kube-system/kube-dns updated: 3 ports
I0802 09:16:14.374808       1 shared_informer.go:247] Caches are synced for service config 
I0802 09:16:14.375051       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0802 09:16:14.376002       1 proxier.go:818] Not syncing iptables until Services and Endpoints have been received from master
I0802 09:16:14.376231       1 service.go:390] Adding new service port "default/kubernetes:https" at 100.64.0.1:443/TCP
I0802 09:16:14.376258       1 service.go:390] Adding new service port "kube-system/kube-dns:dns" at 100.64.0.10:53/UDP
I0802 09:16:14.376271       1 service.go:390] Adding new service port "kube-system/kube-dns:dns-tcp" at 100.64.0.10:53/TCP
I0802 09:16:14.376319       1 service.go:390] Adding new service port "kube-system/kube-dns:metrics" at 100.64.0.10:9153/TCP
I0802 09:16:14.376379       1 proxier.go:871] Syncing iptables rules
I0802 09:16:14.419647       1 proxier.go:826] syncProxyRules took 43.604096ms
I0802 09:16:17.147438       1 proxier.go:858] Stale udp service kube-system/kube-dns:dns -> 100.64.0.10
I0802 09:16:17.147478       1 proxier.go:871] Syncing iptables rules
I0802 09:16:17.187101       1 proxier.go:826] syncProxyRules took 39.974918ms
I0802 09:16:31.971131       1 proxier.go:871] Syncing iptables rules
I0802 09:16:32.001365       1 proxier.go:826] syncProxyRules took 30.518017ms
I0802 09:19:12.785027       1 service.go:275] Service crd-webhook-9522/e2e-test-crd-conversion-webhook updated: 1 ports
I0802 09:19:12.785424       1 service.go:390] Adding new service port "crd-webhook-9522/e2e-test-crd-conversion-webhook" at 100.68.250.245:9443/TCP
I0802 09:19:12.785473       1 proxier.go:871] Syncing iptables rules
I0802 09:19:12.810642       1 proxier.go:826] syncProxyRules took 25.511167ms
I0802 09:19:12.810936       1 proxier.go:871] Syncing iptables rules
I0802 09:19:12.841277       1 proxier.go:826] syncProxyRules took 30.599196ms
I0802 09:19:17.346428       1 service.go:275] Service crd-webhook-9522/e2e-test-crd-conversion-webhook updated: 0 ports
I0802 09:19:17.346799       1 service.go:415] Removing service port "crd-webhook-9522/e2e-test-crd-conversion-webhook"
I0802 09:19:17.346852       1 proxier.go:871] Syncing iptables rules
I0802 09:19:17.375540       1 proxier.go:826] syncProxyRules took 29.076917ms
I0802 09:19:17.376115       1 proxier.go:871] Syncing iptables rules
I0802 09:19:17.400863       1 proxier.go:826] syncProxyRules took 25.291634ms
I0802 09:19:28.759661       1 service.go:275] Service webhook-5205/e2e-test-webhook updated: 1 ports
I0802 09:19:28.760019       1 service.go:390] Adding new service port "webhook-5205/e2e-test-webhook" at 100.65.222.225:8443/TCP
I0802 09:19:28.760103       1 proxier.go:871] Syncing iptables rules
I0802 09:19:28.784753       1 proxier.go:826] syncProxyRules took 25.059978ms
I0802 09:19:28.785168       1 proxier.go:871] Syncing iptables rules
I0802 09:19:28.809863       1 proxier.go:826] syncProxyRules took 25.077565ms
I0802 09:19:32.374199       1 service.go:275] Service webhook-5205/e2e-test-webhook updated: 0 ports
I0802 09:19:32.374581       1 service.go:415] Removing service port "webhook-5205/e2e-test-webhook"
I0802 09:19:32.374633       1 proxier.go:871] Syncing iptables rules
I0802 09:19:32.409495       1 proxier.go:826] syncProxyRules took 35.261559ms
I0802 09:19:32.409939       1 
proxier.go:871] Syncing iptables rules\nI0802 09:19:32.441921       1 proxier.go:826] syncProxyRules took 32.360649ms\nI0802 09:19:32.511844       1 service.go:275] Service webhook-4678/e2e-test-webhook updated: 1 ports\nI0802 09:19:33.442397       1 service.go:390] Adding new service port \"webhook-4678/e2e-test-webhook\" at 100.68.62.124:8443/TCP\nI0802 09:19:33.442468       1 proxier.go:871] Syncing iptables rules\nI0802 09:19:33.475985       1 proxier.go:826] syncProxyRules took 33.96686ms\nI0802 09:19:36.796736       1 service.go:275] Service webhook-4678/e2e-test-webhook updated: 0 ports\nI0802 09:19:36.797121       1 service.go:415] Removing service port \"webhook-4678/e2e-test-webhook\"\nI0802 09:19:36.797175       1 proxier.go:871] Syncing iptables rules\nI0802 09:19:36.822678       1 proxier.go:826] syncProxyRules took 25.907161ms\nI0802 09:19:37.262082       1 proxier.go:871] Syncing iptables rules\nI0802 09:19:37.292682       1 proxier.go:826] syncProxyRules took 31.240476ms\nI0802 09:19:52.222592       1 service.go:275] Service volume-2336-5573/csi-hostpath-attacher updated: 1 ports\nI0802 09:19:52.222993       1 service.go:390] Adding new service port \"volume-2336-5573/csi-hostpath-attacher:dummy\" at 100.65.192.59:12345/TCP\nI0802 09:19:52.223045       1 proxier.go:871] Syncing iptables rules\nI0802 09:19:52.256215       1 proxier.go:826] syncProxyRules took 33.588812ms\nI0802 09:19:52.256698       1 proxier.go:871] Syncing iptables rules\nI0802 09:19:52.283701       1 proxier.go:826] syncProxyRules took 27.449802ms\nI0802 09:19:52.797355       1 service.go:275] Service volume-2336-5573/csi-hostpathplugin updated: 1 ports\nI0802 09:19:53.246780       1 service.go:275] Service volume-2336-5573/csi-hostpath-provisioner updated: 1 ports\nI0802 09:19:53.247150       1 service.go:390] Adding new service port \"volume-2336-5573/csi-hostpathplugin:dummy\" at 100.65.252.165:12345/TCP\nI0802 09:19:53.247171       1 service.go:390] Adding new service port 
\"volume-2336-5573/csi-hostpath-provisioner:dummy\" at 100.71.131.80:12345/TCP\nI0802 09:19:53.247227       1 proxier.go:871] Syncing iptables rules\nI0802 09:19:53.391417       1 proxier.go:826] syncProxyRules took 144.594765ms\nI0802 09:19:53.577159       1 service.go:275] Service volume-2336-5573/csi-hostpath-resizer updated: 1 ports\nI0802 09:19:53.969398       1 service.go:275] Service volume-2336-5573/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:19:54.391842       1 service.go:390] Adding new service port \"volume-2336-5573/csi-hostpath-resizer:dummy\" at 100.70.190.18:12345/TCP\nI0802 09:19:54.391878       1 service.go:390] Adding new service port \"volume-2336-5573/csi-hostpath-snapshotter:dummy\" at 100.69.168.99:12345/TCP\nI0802 09:19:54.391935       1 proxier.go:871] Syncing iptables rules\nI0802 09:19:54.421034       1 proxier.go:826] syncProxyRules took 29.485636ms\nI0802 09:20:01.631568       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:01.661935       1 proxier.go:826] syncProxyRules took 30.732028ms\nI0802 09:20:05.021873       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:05.047531       1 proxier.go:826] syncProxyRules took 25.925089ms\nI0802 09:20:06.639985       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:06.746994       1 proxier.go:826] syncProxyRules took 107.278749ms\nI0802 09:20:11.341525       1 service.go:275] Service conntrack-8962/svc-udp updated: 1 ports\nI0802 09:20:11.342029       1 service.go:390] Adding new service port \"conntrack-8962/svc-udp:udp\" at 100.69.106.222:80/UDP\nI0802 09:20:11.342089       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:11.378802       1 proxier.go:826] syncProxyRules took 37.223893ms\nI0802 09:20:11.379173       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:11.406887       1 proxier.go:826] syncProxyRules took 28.050036ms\nI0802 09:20:12.754756       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:12.792004       1 proxier.go:826] syncProxyRules 
took 37.609428ms\nI0802 09:20:13.526818       1 service.go:275] Service webhook-5840/e2e-test-webhook updated: 1 ports\nI0802 09:20:13.527933       1 service.go:390] Adding new service port \"webhook-5840/e2e-test-webhook\" at 100.71.172.43:8443/TCP\nI0802 09:20:13.528006       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:13.571831       1 proxier.go:826] syncProxyRules took 44.955148ms\nI0802 09:20:14.572362       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:14.614534       1 proxier.go:826] syncProxyRules took 42.570876ms\nI0802 09:20:16.003571       1 proxier.go:858] Stale udp service conntrack-8962/svc-udp:udp -> 100.69.106.222\nI0802 09:20:16.003617       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:16.051173       1 proxier.go:826] syncProxyRules took 48.024175ms\nI0802 09:20:17.191376       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-attacher updated: 1 ports\nI0802 09:20:17.191758       1 service.go:390] Adding new service port \"ephemeral-2112-9083/csi-hostpath-attacher:dummy\" at 100.69.5.207:12345/TCP\nI0802 09:20:17.191820       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:17.227317       1 proxier.go:826] syncProxyRules took 35.906763ms\nI0802 09:20:17.399211       1 service.go:275] Service services-7758/nodeport-reuse updated: 1 ports\nI0802 09:20:17.399644       1 service.go:390] Adding new service port \"services-7758/nodeport-reuse\" at 100.64.24.239:80/TCP\nI0802 09:20:17.399711       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:17.423272       1 service.go:275] Service webhook-5840/e2e-test-webhook updated: 0 ports\nI0802 09:20:17.463403       1 proxier.go:1715] Opened local port \"nodePort for services-7758/nodeport-reuse\" (:31870/tcp)\nI0802 09:20:17.468718       1 proxier.go:826] syncProxyRules took 69.469535ms\nI0802 09:20:17.582252       1 service.go:275] Service services-7758/nodeport-reuse updated: 0 ports\nI0802 09:20:17.768075       1 service.go:275] Service 
ephemeral-2112-9083/csi-hostpathplugin updated: 1 ports\nI0802 09:20:18.155059       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-provisioner updated: 1 ports\nI0802 09:20:18.469200       1 service.go:415] Removing service port \"services-7758/nodeport-reuse\"\nI0802 09:20:18.469247       1 service.go:390] Adding new service port \"ephemeral-2112-9083/csi-hostpathplugin:dummy\" at 100.68.89.107:12345/TCP\nI0802 09:20:18.469263       1 service.go:390] Adding new service port \"ephemeral-2112-9083/csi-hostpath-provisioner:dummy\" at 100.71.186.96:12345/TCP\nI0802 09:20:18.469312       1 service.go:415] Removing service port \"webhook-5840/e2e-test-webhook\"\nI0802 09:20:18.469388       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:18.516518       1 proxier.go:826] syncProxyRules took 47.669657ms\nI0802 09:20:18.541503       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-resizer updated: 1 ports\nI0802 09:20:18.926950       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:20:19.517048       1 service.go:390] Adding new service port \"ephemeral-2112-9083/csi-hostpath-resizer:dummy\" at 100.68.254.250:12345/TCP\nI0802 09:20:19.517091       1 service.go:390] Adding new service port \"ephemeral-2112-9083/csi-hostpath-snapshotter:dummy\" at 100.71.116.61:12345/TCP\nI0802 09:20:19.517165       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:19.543561       1 proxier.go:826] syncProxyRules took 26.840541ms\nI0802 09:20:20.544186       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:20.571053       1 proxier.go:826] syncProxyRules took 27.247643ms\nI0802 09:20:22.308912       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:22.366467       1 proxier.go:826] syncProxyRules took 57.927931ms\nI0802 09:20:22.449704       1 service.go:275] Service services-7758/nodeport-reuse updated: 1 ports\nI0802 09:20:22.450098       1 service.go:390] Adding new service port 
\"services-7758/nodeport-reuse\" at 100.68.238.177:80/TCP\nI0802 09:20:22.450173       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:22.485453       1 proxier.go:1715] Opened local port \"nodePort for services-7758/nodeport-reuse\" (:31870/tcp)\nI0802 09:20:22.491526       1 proxier.go:826] syncProxyRules took 41.781302ms\nI0802 09:20:22.641262       1 service.go:275] Service services-7758/nodeport-reuse updated: 0 ports\nI0802 09:20:23.492056       1 service.go:415] Removing service port \"services-7758/nodeport-reuse\"\nI0802 09:20:23.492136       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:23.546919       1 proxier.go:826] syncProxyRules took 55.247815ms\nI0802 09:20:24.774334       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:24.801182       1 proxier.go:826] syncProxyRules took 27.222673ms\nI0802 09:20:26.218863       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:26.265291       1 proxier.go:826] syncProxyRules took 46.758683ms\nI0802 09:20:29.724050       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:29.751858       1 proxier.go:826] syncProxyRules took 28.171801ms\nI0802 09:20:33.884319       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:33.912160       1 proxier.go:826] syncProxyRules took 28.192355ms\nI0802 09:20:37.017571       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:37.044545       1 proxier.go:826] syncProxyRules took 27.309065ms\nI0802 09:20:45.176552       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-attacher updated: 1 ports\nI0802 09:20:45.176980       1 service.go:390] Adding new service port \"volume-expand-6275-6517/csi-hostpath-attacher:dummy\" at 100.70.25.117:12345/TCP\nI0802 09:20:45.177039       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:45.228148       1 proxier.go:826] syncProxyRules took 51.559777ms\nI0802 09:20:45.228603       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:45.278378       1 proxier.go:826] syncProxyRules took 
50.19153ms\nI0802 09:20:45.757363       1 service.go:275] Service volume-expand-6275-6517/csi-hostpathplugin updated: 1 ports\nI0802 09:20:46.146554       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-provisioner updated: 1 ports\nI0802 09:20:46.278979       1 service.go:390] Adding new service port \"volume-expand-6275-6517/csi-hostpathplugin:dummy\" at 100.66.211.72:12345/TCP\nI0802 09:20:46.279011       1 service.go:390] Adding new service port \"volume-expand-6275-6517/csi-hostpath-provisioner:dummy\" at 100.68.28.63:12345/TCP\nI0802 09:20:46.279071       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:46.306973       1 proxier.go:826] syncProxyRules took 28.455038ms\nI0802 09:20:46.529016       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-resizer updated: 1 ports\nI0802 09:20:46.914740       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:20:47.307558       1 service.go:390] Adding new service port \"volume-expand-6275-6517/csi-hostpath-resizer:dummy\" at 100.69.212.248:12345/TCP\nI0802 09:20:47.307593       1 service.go:390] Adding new service port \"volume-expand-6275-6517/csi-hostpath-snapshotter:dummy\" at 100.64.49.220:12345/TCP\nI0802 09:20:47.307666       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:47.352617       1 proxier.go:826] syncProxyRules took 45.511608ms\nI0802 09:20:49.992695       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:50.017458       1 service.go:275] Service conntrack-8962/svc-udp updated: 0 ports\nI0802 09:20:50.035873       1 proxier.go:826] syncProxyRules took 43.796303ms\nI0802 09:20:50.038439       1 service.go:415] Removing service port \"conntrack-8962/svc-udp:udp\"\nI0802 09:20:50.038515       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:50.084124       1 proxier.go:826] syncProxyRules took 48.216462ms\nI0802 09:20:50.996347       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:51.026015       1 
proxier.go:826] syncProxyRules took 30.090724ms\nI0802 09:20:52.026567       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:52.054490       1 proxier.go:826] syncProxyRules took 28.384637ms\nI0802 09:20:52.612520       1 service.go:275] Service kubectl-6830/rm2 updated: 1 ports\nI0802 09:20:53.055242       1 service.go:390] Adding new service port \"kubectl-6830/rm2\" at 100.67.16.37:1234/TCP\nI0802 09:20:53.055375       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:53.084348       1 proxier.go:826] syncProxyRules took 29.736843ms\nI0802 09:20:56.062708       1 service.go:275] Service kubectl-6830/rm3 updated: 1 ports\nI0802 09:20:56.063143       1 service.go:390] Adding new service port \"kubectl-6830/rm3\" at 100.67.68.194:2345/TCP\nI0802 09:20:56.063216       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:56.091190       1 proxier.go:826] syncProxyRules took 28.44578ms\nI0802 09:20:56.091765       1 proxier.go:871] Syncing iptables rules\nI0802 09:20:56.121074       1 proxier.go:826] syncProxyRules took 29.852313ms\nI0802 09:21:04.095209       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:04.114167       1 service.go:275] Service kubectl-6830/rm2 updated: 0 ports\nI0802 09:21:04.126712       1 service.go:275] Service kubectl-6830/rm3 updated: 0 ports\nI0802 09:21:04.145898       1 proxier.go:826] syncProxyRules took 51.192947ms\nI0802 09:21:04.146656       1 service.go:415] Removing service port \"kubectl-6830/rm2\"\nI0802 09:21:04.146683       1 service.go:415] Removing service port \"kubectl-6830/rm3\"\nI0802 09:21:04.146752       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:04.192163       1 proxier.go:826] syncProxyRules took 46.23089ms\nI0802 09:21:11.739715       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-attacher updated: 1 ports\nI0802 09:21:11.740471       1 service.go:390] Adding new service port \"provisioning-2246-2145/csi-hostpath-attacher:dummy\" at 100.66.46.66:12345/TCP\nI0802 
09:21:11.740551       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:11.788816       1 proxier.go:826] syncProxyRules took 49.055276ms\nI0802 09:21:11.789379       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:11.836980       1 proxier.go:826] syncProxyRules took 48.116064ms\nI0802 09:21:12.324049       1 service.go:275] Service provisioning-2246-2145/csi-hostpathplugin updated: 1 ports\nI0802 09:21:12.745725       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-provisioner updated: 1 ports\nI0802 09:21:12.746330       1 service.go:390] Adding new service port \"provisioning-2246-2145/csi-hostpathplugin:dummy\" at 100.70.43.46:12345/TCP\nI0802 09:21:12.746357       1 service.go:390] Adding new service port \"provisioning-2246-2145/csi-hostpath-provisioner:dummy\" at 100.65.113.83:12345/TCP\nI0802 09:21:12.746424       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:12.850249       1 proxier.go:826] syncProxyRules took 104.462086ms\nI0802 09:21:13.132404       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-resizer updated: 1 ports\nI0802 09:21:13.523627       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:21:13.850944       1 service.go:390] Adding new service port \"provisioning-2246-2145/csi-hostpath-resizer:dummy\" at 100.71.47.96:12345/TCP\nI0802 09:21:13.850978       1 service.go:390] Adding new service port \"provisioning-2246-2145/csi-hostpath-snapshotter:dummy\" at 100.66.113.34:12345/TCP\nI0802 09:21:13.851051       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:13.903143       1 proxier.go:826] syncProxyRules took 52.803142ms\nI0802 09:21:15.774406       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-attacher updated: 0 ports\nI0802 09:21:15.775051       1 service.go:415] Removing service port \"ephemeral-2112-9083/csi-hostpath-attacher:dummy\"\nI0802 09:21:15.775121       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:15.858380 
      1 proxier.go:826] syncProxyRules took 83.934133ms\nI0802 09:21:15.859066       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:15.939634       1 proxier.go:826] syncProxyRules took 81.219225ms\nI0802 09:21:16.372953       1 service.go:275] Service ephemeral-2112-9083/csi-hostpathplugin updated: 0 ports\nI0802 09:21:16.770572       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-provisioner updated: 0 ports\nI0802 09:21:16.785794       1 service.go:415] Removing service port \"ephemeral-2112-9083/csi-hostpath-provisioner:dummy\"\nI0802 09:21:16.785826       1 service.go:415] Removing service port \"ephemeral-2112-9083/csi-hostpathplugin:dummy\"\nI0802 09:21:16.785900       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:16.828062       1 proxier.go:826] syncProxyRules took 42.811348ms\nI0802 09:21:17.250764       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-resizer updated: 0 ports\nI0802 09:21:17.585710       1 service.go:275] Service ephemeral-2112-9083/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:21:17.828799       1 service.go:415] Removing service port \"ephemeral-2112-9083/csi-hostpath-snapshotter:dummy\"\nI0802 09:21:17.828836       1 service.go:415] Removing service port \"ephemeral-2112-9083/csi-hostpath-resizer:dummy\"\nI0802 09:21:17.828915       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:17.890373       1 proxier.go:826] syncProxyRules took 62.241281ms\nI0802 09:21:18.901167       1 service.go:275] Service volume-6703-8059/csi-hostpath-attacher updated: 1 ports\nI0802 09:21:18.901883       1 service.go:390] Adding new service port \"volume-6703-8059/csi-hostpath-attacher:dummy\" at 100.67.115.199:12345/TCP\nI0802 09:21:18.901950       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:18.939204       1 proxier.go:826] syncProxyRules took 37.997692ms\nI0802 09:21:19.548438       1 service.go:275] Service volume-6703-8059/csi-hostpathplugin updated: 1 ports\nI0802 09:21:19.871815       1 
service.go:275] Service volume-6703-8059/csi-hostpath-provisioner updated: 1 ports\nI0802 09:21:19.872410       1 service.go:390] Adding new service port \"volume-6703-8059/csi-hostpathplugin:dummy\" at 100.67.227.1:12345/TCP\nI0802 09:21:19.872436       1 service.go:390] Adding new service port \"volume-6703-8059/csi-hostpath-provisioner:dummy\" at 100.70.131.227:12345/TCP\nI0802 09:21:19.872496       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:19.910725       1 proxier.go:826] syncProxyRules took 38.862365ms\nI0802 09:21:20.271488       1 service.go:275] Service volume-6703-8059/csi-hostpath-resizer updated: 1 ports\nI0802 09:21:20.657720       1 service.go:275] Service volume-6703-8059/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:21:20.911933       1 service.go:390] Adding new service port \"volume-6703-8059/csi-hostpath-resizer:dummy\" at 100.67.189.41:12345/TCP\nI0802 09:21:20.911961       1 service.go:390] Adding new service port \"volume-6703-8059/csi-hostpath-snapshotter:dummy\" at 100.69.0.142:12345/TCP\nI0802 09:21:20.912039       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:20.968277       1 proxier.go:826] syncProxyRules took 57.030086ms\nI0802 09:21:24.928749       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:24.980491       1 proxier.go:826] syncProxyRules took 81.153147ms\nI0802 09:21:25.790857       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:25.822760       1 proxier.go:826] syncProxyRules took 32.395941ms\nI0802 09:21:27.191861       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:27.256044       1 proxier.go:826] syncProxyRules took 64.820424ms\nI0802 09:21:27.988240       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:28.017437       1 proxier.go:826] syncProxyRules took 29.719225ms\nI0802 09:21:29.018098       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:29.051596       1 proxier.go:826] syncProxyRules took 34.050309ms\nI0802 09:21:29.641840       1 service.go:275] Service 
volume-2336-5573/csi-hostpath-attacher updated: 0 ports\nI0802 09:21:29.642430       1 service.go:415] Removing service port \"volume-2336-5573/csi-hostpath-attacher:dummy\"\nI0802 09:21:29.642506       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:29.675817       1 proxier.go:826] syncProxyRules took 33.933847ms\nI0802 09:21:30.252616       1 service.go:275] Service volume-2336-5573/csi-hostpathplugin updated: 0 ports\nI0802 09:21:30.253095       1 service.go:415] Removing service port \"volume-2336-5573/csi-hostpathplugin:dummy\"\nI0802 09:21:30.253197       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:30.287265       1 proxier.go:826] syncProxyRules took 34.615556ms\nI0802 09:21:30.654356       1 service.go:275] Service volume-2336-5573/csi-hostpath-provisioner updated: 0 ports\nI0802 09:21:31.044582       1 service.go:275] Service volume-2336-5573/csi-hostpath-resizer updated: 0 ports\nI0802 09:21:31.288040       1 service.go:415] Removing service port \"volume-2336-5573/csi-hostpath-provisioner:dummy\"\nI0802 09:21:31.288075       1 service.go:415] Removing service port \"volume-2336-5573/csi-hostpath-resizer:dummy\"\nI0802 09:21:31.288163       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:31.324790       1 proxier.go:826] syncProxyRules took 37.329211ms\nI0802 09:21:31.434202       1 service.go:275] Service volume-2336-5573/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:21:32.326136       1 service.go:415] Removing service port \"volume-2336-5573/csi-hostpath-snapshotter:dummy\"\nI0802 09:21:32.326319       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:32.379222       1 proxier.go:826] syncProxyRules took 53.862517ms\nI0802 09:21:32.571827       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-attacher updated: 0 ports\nI0802 09:21:33.145976       1 service.go:275] Service volume-expand-6275-6517/csi-hostpathplugin updated: 0 ports\nI0802 09:21:33.380551       1 service.go:415] Removing service port 
\"volume-expand-6275-6517/csi-hostpath-attacher:dummy\"\nI0802 09:21:33.380589       1 service.go:415] Removing service port \"volume-expand-6275-6517/csi-hostpathplugin:dummy\"\nI0802 09:21:33.380678       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:33.410112       1 proxier.go:826] syncProxyRules took 30.767566ms\nI0802 09:21:33.605584       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-provisioner updated: 0 ports\nI0802 09:21:33.929771       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-resizer updated: 0 ports\nI0802 09:21:34.319084       1 service.go:275] Service volume-expand-6275-6517/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:21:34.319757       1 service.go:415] Removing service port \"volume-expand-6275-6517/csi-hostpath-provisioner:dummy\"\nI0802 09:21:34.319795       1 service.go:415] Removing service port \"volume-expand-6275-6517/csi-hostpath-resizer:dummy\"\nI0802 09:21:34.319806       1 service.go:415] Removing service port \"volume-expand-6275-6517/csi-hostpath-snapshotter:dummy\"\nI0802 09:21:34.319899       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:34.369125       1 proxier.go:826] syncProxyRules took 49.976996ms\nI0802 09:21:35.369757       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:35.397070       1 proxier.go:826] syncProxyRules took 27.812733ms\nI0802 09:21:46.762543       1 proxier.go:871] Syncing iptables rules\nI0802 09:21:46.791181       1 proxier.go:826] syncProxyRules took 29.0978ms\nI0802 09:22:01.407399       1 service.go:275] Service services-6870/nodeport-update-service updated: 1 ports\nI0802 09:22:01.407974       1 service.go:390] Adding new service port \"services-6870/nodeport-update-service\" at 100.70.108.206:80/TCP\nI0802 09:22:01.408042       1 proxier.go:871] Syncing iptables rules\nI0802 09:22:01.437149       1 proxier.go:826] syncProxyRules took 29.710279ms\nI0802 09:22:01.437629       1 proxier.go:871] Syncing iptables rules\nI0802 
09:22:01.465204       1 proxier.go:826] syncProxyRules took 28.018791ms\nI0802 09:22:01.791143       1 service.go:275] Service services-6870/nodeport-update-service updated: 1 ports\nI0802 09:22:02.465857       1 service.go:390] Adding new service port \"services-6870/nodeport-update-service:tcp-port\" at 100.70.108.206:80/TCP\nI0802 09:22:02.465890       1 service.go:415] Removing service port \"services-6870/nodeport-update-service\"\nI0802 09:22:02.465944       1 proxier.go:871] Syncing iptables rules\nI0802 09:22:02.491223       1 proxier.go:1715] Opened local port \"nodePort for services-6870/nodeport-update-service:tcp-port\" (:32759/tcp)\nI0802 09:22:02.496350       1 proxier.go:826] syncProxyRules took 30.977425ms\nI0802 09:22:04.331363       1 proxier.go:871] Syncing iptables rules\nI0802 09:22:04.365493       1 proxier.go:826] syncProxyRules took 34.621461ms\nI0802 09:22:06.008302       1 service.go:275] Service webhook-6750/e2e-test-webhook updated: 1 ports\nI0802 09:22:06.008778       1 service.go:390] Adding new service port \"webhook-6750/e2e-test-webhook\" at 100.71.154.99:8443/TCP\nI0802 09:22:06.008850       1 proxier.go:871] Syncing iptables rules\nI0802 09:22:06.072179       1 proxier.go:826] syncProxyRules took 63.835075ms\nI0802 09:22:06.072809       1 proxier.go:871] Syncing iptables rules\nI0802 09:22:06.103534       1 proxier.go:826] syncProxyRules took 31.316631ms\nI0802 09:22:07.555593       1 proxier.go:871] Syncing iptables rules\nI0802 09:22:07.583130       1 proxier.go:826] syncProxyRules took 28.047184ms\nI0802 09:22:09.981184       1 service.go:275] Service webhook-6750/e2e-test-webhook updated: 0 ports\nI0802 09:22:09.981704       1 service.go:415] Removing service port \"webhook-6750/e2e-test-webhook\"\nI0802 09:22:09.981777       1 proxier.go:871] Syncing iptables rules\nI0802 09:22:10.010410       1 proxier.go:826] syncProxyRules took 29.190284ms\nI0802 09:22:10.499426       1 proxier.go:871] Syncing iptables rules\nI0802 
09:22:10.544983       1 proxier.go:826] syncProxyRules took 46.190685ms\nI0802 09:22:18.503490       1 service.go:275] Service deployment-4314/test-rolling-update-with-lb updated: 1 ports\nI0802 09:22:18.503973       1 service.go:390] Adding new service port \"deployment-4314/test-rolling-update-with-lb\" at 100.71.104.248:80/TCP\nI0802 09:22:18.504046       1 proxier.go:871] Syncing iptables rules\nI0802 09:22:18.527678       1 proxier.go:1715] Opened local port \"nodePort for deployment-4314/test-rolling-update-with-lb\" (:31127/tcp)\nI0802 09:22:18.532040       1 service_health.go:98] Opening healthcheck \"deployment-4314/test-rolling-update-with-lb\" on port 31777\nI0802 09:22:18.532182       1 proxier.go:826] syncProxyRules took 28.658127ms\nI0802 09:22:18.532772       1 proxier.go:871] Syncing iptables rules\nI0802 09:22:18.560917       1 proxier.go:826] syncProxyRules took 28.710514ms\nI0802 09:22:19.753819       1 service.go:275] Service dns-7425/test-service-2 updated: 1 ports\nI0802 09:22:19.754427       1 service.go:390] Adding new service port \"dns-7425/test-service-2:http\" at 100.70.0.153:80/TCP\nI0802 09:22:19.754768       1 proxier.go:871] Syncing iptables rules\nI0802 09:22:19.785358       1 proxier.go:826] syncProxyRules took 31.500318ms\nI0802 09:22:20.673025       1 service.go:275] Service deployment-4314/test-rolling-update-with-lb updated: 1 ports\nI0802 09:22:20.673569       1 service.go:392] Updating existing service port \"deployment-4314/test-rolling-update-with-lb\" at 100.71.104.248:80/TCP\nI0802 09:22:20.673653       1 proxier.go:871] Syncing iptables rules\nI0802 09:22:20.715953       1 proxier.go:826] syncProxyRules took 42.88864ms\nI0802 09:22:21.298133       1 service.go:275] Service services-6002/nodeport-test updated: 1 ports\nI0802 09:22:21.717978       1 service.go:390] Adding new service port \"services-6002/nodeport-test:http\" at 100.64.32.143:80/TCP\nI0802 09:22:21.718185       1 proxier.go:871] Syncing iptables 
rules
I0802 09:22:21.747072       1 proxier.go:1715] Opened local port "nodePort for services-6002/nodeport-test:http" (:32041/tcp)
I0802 09:22:21.754987       1 proxier.go:826] syncProxyRules took 38.902998ms
I0802 09:22:23.133890       1 proxier.go:871] Syncing iptables rules
I0802 09:22:23.193567       1 proxier.go:826] syncProxyRules took 60.132556ms
I0802 09:22:24.194243       1 proxier.go:871] Syncing iptables rules
I0802 09:22:24.222990       1 proxier.go:826] syncProxyRules took 29.283745ms
I0802 09:22:24.251093       1 service.go:275] Service services-6870/nodeport-update-service updated: 2 ports
I0802 09:22:24.540342       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-attacher updated: 0 ports
I0802 09:22:24.540845       1 service.go:415] Removing service port "provisioning-2246-2145/csi-hostpath-attacher:dummy"
I0802 09:22:24.540876       1 service.go:392] Updating existing service port "services-6870/nodeport-update-service:tcp-port" at 100.70.108.206:80/TCP
I0802 09:22:24.540898       1 service.go:390] Adding new service port "services-6870/nodeport-update-service:udp-port" at 100.70.108.206:80/UDP
I0802 09:22:24.541012       1 proxier.go:858] Stale udp service services-6870/nodeport-update-service:udp-port -> 100.70.108.206
I0802 09:22:24.541070       1 proxier.go:865] Stale udp service NodePort services-6870/nodeport-update-service:udp-port -> 31486
I0802 09:22:24.541098       1 proxier.go:871] Syncing iptables rules
I0802 09:22:24.575383       1 proxier.go:1715] Opened local port "nodePort for services-6870/nodeport-update-service:tcp-port" (:31290/tcp)
I0802 09:22:24.575744       1 proxier.go:1715] Opened local port "nodePort for services-6870/nodeport-update-service:udp-port" (:31486/udp)
I0802 09:22:24.591096       1 proxier.go:826] syncProxyRules took 50.699613ms
I0802 09:22:25.165090       1 service.go:275] Service provisioning-2246-2145/csi-hostpathplugin updated: 0 ports
I0802 09:22:25.481415       1 service.go:275] Service webhook-7979/e2e-test-webhook updated: 1 ports
I0802 09:22:25.566031       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-provisioner updated: 0 ports
I0802 09:22:25.566549       1 service.go:415] Removing service port "provisioning-2246-2145/csi-hostpathplugin:dummy"
I0802 09:22:25.566584       1 service.go:390] Adding new service port "webhook-7979/e2e-test-webhook" at 100.69.140.216:8443/TCP
I0802 09:22:25.566594       1 service.go:415] Removing service port "provisioning-2246-2145/csi-hostpath-provisioner:dummy"
I0802 09:22:25.566677       1 proxier.go:871] Syncing iptables rules
I0802 09:22:25.601383       1 proxier.go:826] syncProxyRules took 35.300328ms
I0802 09:22:25.968394       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-resizer updated: 0 ports
I0802 09:22:26.366085       1 service.go:275] Service provisioning-2246-2145/csi-hostpath-snapshotter updated: 0 ports
I0802 09:22:26.602028       1 service.go:415] Removing service port "provisioning-2246-2145/csi-hostpath-resizer:dummy"
I0802 09:22:26.602065       1 service.go:415] Removing service port "provisioning-2246-2145/csi-hostpath-snapshotter:dummy"
I0802 09:22:26.602161       1 proxier.go:871] Syncing iptables rules
I0802 09:22:26.614412       1 service.go:275] Service webhook-7274/e2e-test-webhook updated: 1 ports
I0802 09:22:26.634035       1 proxier.go:826] syncProxyRules took 32.4911ms
I0802 09:22:27.634743       1 service.go:390] Adding new service port "webhook-7274/e2e-test-webhook" at 100.64.174.107:8443/TCP
I0802 09:22:27.634842       1 proxier.go:871] Syncing iptables rules
I0802 09:22:27.710692       1 proxier.go:826] syncProxyRules took 76.530576ms
I0802 09:22:29.357232       1 service.go:275] Service webhook-7274/e2e-test-webhook updated: 0 ports
I0802 09:22:29.357838       1 service.go:415] Removing service port "webhook-7274/e2e-test-webhook"
I0802 09:22:29.357915       1 proxier.go:871] Syncing iptables rules
I0802 09:22:29.386713       1 proxier.go:826] syncProxyRules took 29.444115ms
I0802 09:22:29.596583       1 proxier.go:871] Syncing iptables rules
I0802 09:22:29.653253       1 proxier.go:826] syncProxyRules took 58.66882ms
I0802 09:22:31.831849       1 service.go:275] Service webhook-7979/e2e-test-webhook updated: 0 ports
I0802 09:22:31.832390       1 service.go:415] Removing service port "webhook-7979/e2e-test-webhook"
I0802 09:22:31.832471       1 proxier.go:871] Syncing iptables rules
I0802 09:22:31.873079       1 proxier.go:826] syncProxyRules took 41.196729ms
I0802 09:22:31.873707       1 proxier.go:871] Syncing iptables rules
I0802 09:22:31.916072       1 proxier.go:826] syncProxyRules took 42.954523ms
I0802 09:22:36.131911       1 proxier.go:871] Syncing iptables rules
I0802 09:22:36.160068       1 proxier.go:826] syncProxyRules took 28.635396ms
I0802 09:22:38.998423       1 service.go:275] Service endpointslice-7693/example-int-port updated: 1 ports
I0802 09:22:38.998914       1 service.go:390] Adding new service port "endpointslice-7693/example-int-port:example" at 100.69.153.56:80/TCP
I0802 09:22:38.998991       1 proxier.go:871] Syncing iptables rules
I0802 09:22:39.046756       1 proxier.go:826] syncProxyRules took 48.294941ms
I0802 09:22:39.050158       1 proxier.go:871] Syncing iptables rules
I0802 09:22:39.088847       1 proxier.go:826] syncProxyRules took 42.047199ms
I0802 09:22:39.235706       1 service.go:275] Service endpointslice-7693/example-named-port updated: 1 ports
I0802 09:22:39.382937       1 service.go:275] Service endpointslice-7693/example-no-match updated: 1 ports
I0802 09:22:40.089532       1 service.go:390] Adding new service port "endpointslice-7693/example-no-match:example-no-match" at 100.71.66.81:80/TCP
I0802 09:22:40.089563       1 service.go:390] Adding new service port "endpointslice-7693/example-named-port:http" at 100.70.243.177:80/TCP
I0802 09:22:40.089644       1 proxier.go:871] Syncing iptables rules
I0802 09:22:40.119021       1 proxier.go:826] syncProxyRules took 29.953535ms
I0802 09:22:41.119718       1 proxier.go:871] Syncing iptables rules
I0802 09:22:41.160654       1 proxier.go:826] syncProxyRules took 41.507472ms
I0802 09:22:43.507563       1 service.go:275] Service webhook-6320/e2e-test-webhook updated: 1 ports
I0802 09:22:43.508125       1 service.go:390] Adding new service port "webhook-6320/e2e-test-webhook" at 100.66.87.153:8443/TCP
I0802 09:22:43.508200       1 proxier.go:871] Syncing iptables rules
I0802 09:22:43.537713       1 proxier.go:826] syncProxyRules took 30.116891ms
I0802 09:22:43.539007       1 proxier.go:871] Syncing iptables rules
I0802 09:22:43.569270       1 proxier.go:826] syncProxyRules took 31.520436ms
I0802 09:22:45.936807       1 proxier.go:871] Syncing iptables rules
I0802 09:22:45.965761       1 proxier.go:826] syncProxyRules took 29.440805ms
I0802 09:22:45.977945       1 service.go:275] Service services-6002/nodeport-test updated: 0 ports
I0802 09:22:45.978470       1 service.go:415] Removing service port "services-6002/nodeport-test:http"
I0802 09:22:45.978602       1 proxier.go:871] Syncing iptables rules
I0802 09:22:46.021493       1 proxier.go:826] syncProxyRules took 43.505309ms
I0802 09:22:46.376679       1 service.go:275] Service volume-6703-8059/csi-hostpath-attacher updated: 0 ports
I0802 09:22:46.960614       1 service.go:275] Service volume-6703-8059/csi-hostpathplugin updated: 0 ports
I0802 09:22:46.961254       1 service.go:415] Removing service port "volume-6703-8059/csi-hostpath-attacher:dummy"
I0802 09:22:46.961275       1 service.go:415] Removing service port "volume-6703-8059/csi-hostpathplugin:dummy"
I0802 09:22:46.961400       1 proxier.go:871] Syncing iptables rules
I0802 09:22:46.990564       1 proxier.go:826] syncProxyRules took 29.915547ms
I0802 09:22:47.364822       1 service.go:275] Service volume-6703-8059/csi-hostpath-provisioner updated: 0 ports
I0802 09:22:47.756464       1 service.go:275] Service volume-6703-8059/csi-hostpath-resizer updated: 0 ports
I0802 09:22:47.991125       1 service.go:415] Removing service port "volume-6703-8059/csi-hostpath-provisioner:dummy"
I0802 09:22:47.991181       1 service.go:415] Removing service port "volume-6703-8059/csi-hostpath-resizer:dummy"
I0802 09:22:47.991423       1 proxier.go:871] Syncing iptables rules
I0802 09:22:48.020891       1 proxier.go:826] syncProxyRules took 30.199213ms
I0802 09:22:48.153541       1 service.go:275] Service volume-6703-8059/csi-hostpath-snapshotter updated: 0 ports
I0802 09:22:48.628368       1 service.go:275] Service webhook-6320/e2e-test-webhook updated: 0 ports
I0802 09:22:49.021533       1 service.go:415] Removing service port "volume-6703-8059/csi-hostpath-snapshotter:dummy"
I0802 09:22:49.021568       1 service.go:415] Removing service port "webhook-6320/e2e-test-webhook"
I0802 09:22:49.021760       1 proxier.go:871] Syncing iptables rules
I0802 09:22:49.084416       1 proxier.go:826] syncProxyRules took 63.41285ms
I0802 09:22:56.334463       1 service.go:275] Service services-2118/externalname-service updated: 1 ports
I0802 09:22:56.334991       1 service.go:390] Adding new service port "services-2118/externalname-service:http" at 100.70.236.135:80/TCP
I0802 09:22:56.335062       1 proxier.go:871] Syncing iptables rules
I0802 09:22:56.358398       1 proxier.go:1715] Opened local port "nodePort for services-2118/externalname-service:http" (:30683/tcp)
I0802 09:22:56.362958       1 proxier.go:826] syncProxyRules took 28.46029ms
I0802 09:22:56.363490       1 proxier.go:871] Syncing iptables rules
I0802 09:22:56.391613       1 proxier.go:826] syncProxyRules took 28.603475ms
I0802 09:22:57.838968       1 proxier.go:871] Syncing iptables rules
I0802 09:22:57.878976       1 proxier.go:826] syncProxyRules took 40.455391ms
I0802 09:22:58.879752       1 proxier.go:871] Syncing iptables rules
I0802 09:22:58.917974       1 proxier.go:826] syncProxyRules took 38.850898ms
I0802 09:23:01.287956       1 proxier.go:871] Syncing iptables rules
I0802 09:23:01.317930       1 proxier.go:826] syncProxyRules took 30.504683ms
I0802 09:23:01.478700       1 proxier.go:871] Syncing iptables rules
I0802 09:23:01.553525       1 proxier.go:826] syncProxyRules took 75.643735ms
I0802 09:23:01.647765       1 service.go:275] Service services-6870/nodeport-update-service updated: 0 ports
I0802 09:23:02.290740       1 service.go:415] Removing service port "services-6870/nodeport-update-service:tcp-port"
I0802 09:23:02.290771       1 service.go:415] Removing service port "services-6870/nodeport-update-service:udp-port"
I0802 09:23:02.290862       1 proxier.go:871] Syncing iptables rules
I0802 09:23:02.377392       1 proxier.go:826] syncProxyRules took 87.07812ms
I0802 09:23:02.644583       1 service.go:275] Service dns-7425/test-service-2 updated: 0 ports
I0802 09:23:03.378013       1 service.go:415] Removing service port "dns-7425/test-service-2:http"
I0802 09:23:03.378126       1 proxier.go:871] Syncing iptables rules
I0802 09:23:03.405535       1 proxier.go:826] syncProxyRules took 28.0034ms
I0802 09:23:07.556732       1 service.go:275] Service services-6709/clusterip-service updated: 1 ports
I0802 09:23:07.572965       1 service.go:390] Adding new service port "services-6709/clusterip-service" at 100.64.155.117:80/TCP
I0802 09:23:07.573049       1 proxier.go:871] Syncing iptables rules
I0802 09:23:07.753483       1 proxier.go:826] syncProxyRules took 196.710363ms
I0802 09:23:07.754072       1 proxier.go:871] Syncing iptables rules
I0802 09:23:07.756087       1 service.go:275] Service services-6709/externalsvc updated: 1 ports
I0802 09:23:07.807504       1 proxier.go:826] syncProxyRules took 53.98198ms
I0802 09:23:08.808180       1 service.go:390] Adding new service port "services-6709/externalsvc" at 100.64.23.153:80/TCP
I0802 09:23:08.808271       1 proxier.go:871] Syncing iptables rules
I0802 09:23:08.849398       1 proxier.go:826] syncProxyRules took 41.747074ms
I0802 09:23:09.599871       1 proxier.go:871] Syncing iptables rules
I0802 09:23:09.627116       1 proxier.go:826] syncProxyRules took 27.750187ms
I0802 09:23:11.725600       1 service.go:275] Service services-6709/clusterip-service updated: 0 ports
I0802 09:23:11.726142       1 service.go:415] Removing service port "services-6709/clusterip-service"
I0802 09:23:11.726228       1 proxier.go:871] Syncing iptables rules
I0802 09:23:11.762499       1 proxier.go:826] syncProxyRules took 36.856422ms
I0802 09:23:11.763636       1 proxier.go:871] Syncing iptables rules
I0802 09:23:11.797534       1 proxier.go:826] syncProxyRules took 34.58286ms
I0802 09:23:13.921456       1 service.go:275] Service crd-webhook-3792/e2e-test-crd-conversion-webhook updated: 1 ports
I0802 09:23:13.921992       1 service.go:390] Adding new service port "crd-webhook-3792/e2e-test-crd-conversion-webhook" at 100.68.77.198:9443/TCP
I0802 09:23:13.922063       1 proxier.go:871] Syncing iptables rules
I0802 09:23:13.969913       1 proxier.go:826] syncProxyRules took 48.420786ms
I0802 09:23:13.970615       1 proxier.go:871] Syncing iptables rules
I0802 09:23:14.008928       1 proxier.go:826] syncProxyRules took 38.982609ms
I0802 09:23:15.679346       1 service.go:275] Service services-2118/externalname-service updated: 0 ports
I0802 09:23:15.679850       1 service.go:415] Removing service port "services-2118/externalname-service:http"
I0802 09:23:15.679931       1 proxier.go:871] Syncing iptables rules
I0802 09:23:15.714824       1 proxier.go:826] syncProxyRules took 35.439317ms
I0802 09:23:16.715855       1 proxier.go:871] Syncing iptables rules
I0802 09:23:16.748493       1 proxier.go:826] syncProxyRules took 33.511108ms
I0802 09:23:16.968414       1 proxier.go:871] Syncing iptables rules
I0802 09:23:16.996369       1 proxier.go:826] syncProxyRules took 28.451898ms
I0802 09:23:17.660732       1 service.go:275] Service endpointslice-7693/example-int-port updated: 0 ports
I0802 09:23:17.675461       1 service.go:275] Service endpointslice-7693/example-named-port updated: 0 ports
I0802 09:23:17.719659       1 service.go:275] Service endpointslice-7693/example-no-match updated: 0 ports
I0802 09:23:17.794315       1 service.go:275] Service crd-webhook-3792/e2e-test-crd-conversion-webhook updated: 0 ports
I0802 09:23:17.996899       1 service.go:415] Removing service port "endpointslice-7693/example-no-match:example-no-match"
I0802 09:23:17.996934       1 service.go:415] Removing service port "crd-webhook-3792/e2e-test-crd-conversion-webhook"
I0802 09:23:17.996942       1 service.go:415] Removing service port "endpointslice-7693/example-int-port:example"
I0802 09:23:17.996951       1 service.go:415] Removing service port "endpointslice-7693/example-named-port:http"
I0802 09:23:17.997086       1 proxier.go:871] Syncing iptables rules
I0802 09:23:18.024816       1 proxier.go:826] syncProxyRules took 28.31967ms
I0802 09:23:22.156227       1 service.go:275] Service conntrack-3978/svc-udp updated: 1 ports
I0802 09:23:22.156852       1 service.go:390] Adding new service port "conntrack-3978/svc-udp:udp" at 100.69.241.82:80/UDP
I0802 09:23:22.156955       1 proxier.go:871] Syncing iptables rules
I0802 09:23:22.180100       1 proxier.go:1715] Opened local port "nodePort for conntrack-3978/svc-udp:udp" (:30862/udp)
I0802 09:23:22.183703       1 proxier.go:826] syncProxyRules took 27.342403ms
I0802 09:23:22.184309       1 proxier.go:871] Syncing iptables rules
I0802 09:23:22.210695       1 proxier.go:826] syncProxyRules took 26.944862ms
I0802 09:23:24.807430       1 service.go:275] Service provisioning-2278-17/csi-hostpath-attacher updated: 1 ports
I0802 09:23:24.807895       1 service.go:390] Adding new service port "provisioning-2278-17/csi-hostpath-attacher:dummy" at 100.70.66.174:12345/TCP
I0802 09:23:24.807968       1 proxier.go:871] Syncing iptables rules
I0802 09:23:24.835623       1 proxier.go:826] syncProxyRules took 28.149882ms
I0802 09:23:24.836081       1 proxier.go:871] Syncing iptables rules
I0802 09:23:24.863953       1 proxier.go:826] syncProxyRules took 28.280913ms
I0802 09:23:25.381418       1 service.go:275] Service provisioning-2278-17/csi-hostpathplugin updated: 1 ports
I0802 09:23:25.764984       1 service.go:275] Service provisioning-2278-17/csi-hostpath-provisioner updated: 1 ports
I0802 09:23:25.864678       1 service.go:390] Adding new service port "provisioning-2278-17/csi-hostpath-provisioner:dummy" at 100.68.177.103:12345/TCP
I0802 09:23:25.864726       1 service.go:390] Adding new service port "provisioning-2278-17/csi-hostpathplugin:dummy" at 100.68.74.21:12345/TCP
I0802 09:23:25.864822       1 proxier.go:871] Syncing iptables rules
I0802 09:23:25.902670       1 proxier.go:826] syncProxyRules took 38.583224ms
I0802 09:23:26.155861       1 service.go:275] Service provisioning-2278-17/csi-hostpath-resizer updated: 1 ports
I0802 09:23:26.541791       1 service.go:275] Service provisioning-2278-17/csi-hostpath-snapshotter updated: 1 ports
I0802 09:23:26.903258       1 service.go:390] Adding new service port "provisioning-2278-17/csi-hostpath-resizer:dummy" at 100.67.68.161:12345/TCP
I0802 09:23:26.903304       1 service.go:390] Adding new service port "provisioning-2278-17/csi-hostpath-snapshotter:dummy" at 100.68.107.88:12345/TCP
I0802 09:23:26.903442       1 proxier.go:858] Stale udp service conntrack-3978/svc-udp:udp -> 100.69.241.82
I0802 09:23:26.903508       1 proxier.go:865] Stale udp service NodePort conntrack-3978/svc-udp:udp -> 30862
I0802 09:23:26.903531       1 proxier.go:871] Syncing iptables rules
I0802 09:23:26.944207       1 service.go:275] Service services-6709/externalsvc updated: 0 ports
I0802 09:23:26.962334       1 proxier.go:826] syncProxyRules took 59.536732ms
I0802 09:23:27.962929       1 service.go:415] Removing service port "services-6709/externalsvc"
I0802 09:23:27.963096       1 proxier.go:871] Syncing iptables rules
I0802 09:23:27.993661       1 proxier.go:826] syncProxyRules took 31.183594ms
I0802 09:23:28.734939       1 service.go:275] Service proxy-4404/proxy-service-b6j8z updated: 4 ports
I0802 09:23:28.994223       1 service.go:390] Adding new service port "proxy-4404/proxy-service-b6j8z:portname1" at 100.69.18.219:80/TCP
I0802 09:23:28.994254       1 service.go:390] Adding new service port "proxy-4404/proxy-service-b6j8z:portname2" at 100.69.18.219:81/TCP
I0802 09:23:28.994265       1 service.go:390] Adding new service port "proxy-4404/proxy-service-b6j8z:tlsportname1" at 100.69.18.219:443/TCP
I0802 09:23:28.994274       1 service.go:390] Adding new service port "proxy-4404/proxy-service-b6j8z:tlsportname2" at 100.69.18.219:444/TCP
I0802 09:23:28.994362       1 proxier.go:871] Syncing iptables rules
I0802 09:23:29.021598       1 proxier.go:826] syncProxyRules took 27.788322ms
I0802 09:23:30.113848       1 proxier.go:871] Syncing iptables rules
I0802 09:23:30.141941       1 proxier.go:826] syncProxyRules took 28.509503ms
I0802 09:23:35.875828       1 proxier.go:871] Syncing iptables rules
I0802 09:23:35.903726       1 proxier.go:826] syncProxyRules took 28.361873ms
I0802 09:23:37.132007       1 proxier.go:871] Syncing iptables rules
I0802 09:23:37.191455       1 proxier.go:826] syncProxyRules took 60.057786ms
I0802 09:23:37.192125       1 proxier.go:871] Syncing iptables rules
I0802 09:23:37.232217       1 proxier.go:826] syncProxyRules took 40.725154ms
I0802 09:23:42.221257       1 proxier.go:871] Syncing iptables rules
I0802 09:23:42.249940       1 proxier.go:826] syncProxyRules took 29.155193ms
I0802 09:23:46.533930       1 service.go:275] Service pods-9854/fooservice updated: 1 ports
I0802 09:23:46.534426       1 service.go:390] Adding new service port "pods-9854/fooservice" at 100.68.243.160:8765/TCP
I0802 09:23:46.534501       1 proxier.go:871] Syncing iptables rules
I0802 09:23:46.572015       1 proxier.go:826] syncProxyRules took 38.041981ms
I0802 09:23:46.572500       1 proxier.go:871] Syncing iptables rules
I0802 09:23:46.600137       1 proxier.go:826] syncProxyRules took 28.08919ms
I0802 09:23:52.230407       1 proxier.go:871] Syncing iptables rules
I0802 09:23:52.257884       1 proxier.go:826] syncProxyRules took 27.978045ms
I0802 09:23:52.283869       1 service.go:275] Service proxy-4404/proxy-service-b6j8z updated: 0 ports
I0802 09:23:52.284484       1 service.go:415] Removing service port "proxy-4404/proxy-service-b6j8z:portname1"
I0802 09:23:52.284508       1 service.go:415] Removing service port "proxy-4404/proxy-service-b6j8z:portname2"
I0802 09:23:52.284516       1 service.go:415] Removing service port "proxy-4404/proxy-service-b6j8z:tlsportname1"
I0802 09:23:52.284523       1 service.go:415] Removing service port "proxy-4404/proxy-service-b6j8z:tlsportname2"
I0802 09:23:52.284582       1 proxier.go:871] Syncing iptables rules
I0802 09:23:52.311797       1 proxier.go:826] syncProxyRules took 27.893271ms
I0802 09:23:55.288344       1 proxier.go:871] Syncing iptables rules
I0802 09:23:55.314974       1 proxier.go:826] syncProxyRules took 27.130427ms
I0802 09:23:55.445583       1 proxier.go:871] Syncing iptables rules
I0802 09:23:55.472017       1 proxier.go:826] syncProxyRules took 26.906267ms
I0802 09:23:55.502193       1 service.go:275] Service pods-9854/fooservice updated: 0 ports
I0802 09:23:56.472645       1 service.go:415] Removing service port "pods-9854/fooservice"
I0802 09:23:56.472727       1 proxier.go:871] Syncing iptables rules
I0802 09:23:56.499866       1 proxier.go:826] syncProxyRules took 27.705688ms
I0802 09:23:57.360599       1 proxier.go:871] Syncing iptables rules
I0802 09:23:57.388223       1 proxier.go:826] syncProxyRules took 28.179006ms
I0802 09:23:58.401571       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-attacher updated: 1 ports
I0802 09:23:58.402096       1 service.go:390] Adding new service port "ephemeral-9710-9555/csi-hostpath-attacher:dummy" at 100.67.215.140:12345/TCP
I0802 09:23:58.402172       1 proxier.go:871] Syncing iptables rules
I0802 09:23:58.444891       1 proxier.go:826] syncProxyRules took 43.282386ms
I0802 09:23:58.982593       1 service.go:275] Service ephemeral-9710-9555/csi-hostpathplugin updated: 1 ports
I0802 09:23:59.445514       1 service.go:390] Adding new service port "ephemeral-9710-9555/csi-hostpathplugin:dummy" at 100.70.173.208:12345/TCP
I0802 09:23:59.445621       1 proxier.go:871] Syncing iptables rules
I0802 09:23:59.495092       1 proxier.go:826] syncProxyRules took 50.07407ms
I0802 09:23:59.573556       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-provisioner updated: 1 ports
I0802 09:23:59.966563       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-resizer updated: 1 ports
I0802 09:24:00.401316       1 service.go:390] Adding new service port "ephemeral-9710-9555/csi-hostpath-provisioner:dummy" at 100.64.220.139:12345/TCP
I0802 09:24:00.401348       1 service.go:390] Adding new service port "ephemeral-9710-9555/csi-hostpath-resizer:dummy" at 100.65.129.225:12345/TCP
I0802 09:24:00.401446       1 proxier.go:871] Syncing iptables rules
I0802 09:24:00.421506       1 service.go:275] Service ephemeral-9710-9555/csi-hostpath-snapshotter updated: 1 ports
I0802 09:24:00.442200       1 proxier.go:826] syncProxyRules took 41.506642ms
I0802 09:24:01.442991       1 service.go:390] Adding new service port "ephemeral-9710-9555/csi-hostpath-snapshotter:dummy" at 100.69.208.138:12345/TCP
I0802 09:24:01.443123       1 proxier.go:871] Syncing iptables rules
I0802 09:24:01.472390       1 proxier.go:826] syncProxyRules took 29.948166ms
I0802 09:24:01.603551       1 service.go:275] Service conntrack-3978/svc-udp updated: 0 ports
I0802 09:24:02.473053       1 service.go:415] Removing service port "conntrack-3978/svc-udp:udp"
I0802 09:24:02.473197       1 proxier.go:871] Syncing iptables rules
I0802 09:24:02.529198       1 proxier.go:826] syncProxyRules took 56.656911ms
I0802 09:24:03.529992       1 proxier.go:871] Syncing iptables rules
I0802 09:24:03.576832       1 proxier.go:826] syncProxyRules took 47.513175ms
I0802 09:24:04.864578       1 proxier.go:871] Syncing iptables rules
I0802 09:24:04.904818       1 proxier.go:826] syncProxyRules took 40.824862ms
I0802 09:24:05.906241       1 proxier.go:871] Syncing iptables rules
I0802 09:24:05.953351       1 proxier.go:826] syncProxyRules took 48.385121ms
I0802 09:24:07.920051       1 proxier.go:871] Syncing iptables rules
I0802 09:24:07.955329       1 proxier.go:826] syncProxyRules took 35.820181ms
I0802 09:24:09.226877       1 proxier.go:871] Syncing iptables rules
I0802 09:24:09.396000       1 proxier.go:826] syncProxyRules took 169.955153ms
I0802 09:24:09.565745       1 proxier.go:871] Syncing iptables rules
I0802 09:24:09.627834       1 proxier.go:826] syncProxyRules took 62.988836ms
I0802 09:24:12.049712       1 proxier.go:871] Syncing iptables rules
I0802 09:24:12.113214       1 proxier.go:826] syncProxyRules took 64.143297ms
I0802 09:24:12.137667       1 proxier.go:871] Syncing iptables rules
I0802 09:24:12.182867       1 proxier.go:826] syncProxyRules took 45.835624ms
I0802 09:24:13.183671       1 proxier.go:871] Syncing iptables rules
I0802 09:24:13.215842       1 proxier.go:826] syncProxyRules took 32.73496ms
I0802 09:24:14.483502       1 proxier.go:871] Syncing iptables rules
I0802 09:24:14.511583       1 proxier.go:826] syncProxyRules took 28.594101ms
I0802 09:24:15.512991       1 proxier.go:871] Syncing iptables rules
I0802 09:24:15.545874       1 proxier.go:826] syncProxyRules took 34.138702ms
I0802 09:24:16.680103       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-attacher updated: 1 ports
I0802 09:24:16.680706       1 service.go:390] Adding new service port "volume-expand-9751-6514/csi-hostpath-attacher:dummy" at 100.67.115.219:12345/TCP
I0802 09:24:16.680794       1 proxier.go:871] Syncing iptables rules
I0802 09:24:16.708456       1 proxier.go:826] syncProxyRules took 28.318809ms
I0802 09:24:17.262019       1 service.go:275] Service volume-expand-9751-6514/csi-hostpathplugin updated: 1 ports
I0802 09:24:17.262680       1 service.go:390] Adding new service port "volume-expand-9751-6514/csi-hostpathplugin:dummy" at 100.71.58.236:12345/TCP
I0802 09:24:17.262767       1 proxier.go:871] Syncing iptables rules
I0802 09:24:17.302009       1 proxier.go:826] syncProxyRules took 39.949308ms
I0802 09:24:17.648889       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-provisioner updated: 1 ports
I0802 09:24:18.040197       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-resizer updated: 1 ports
I0802 09:24:18.053018       1 service.go:390] Adding new service port "volume-expand-9751-6514/csi-hostpath-provisioner:dummy" at 100.67.64.219:12345/TCP
I0802 09:24:18.053046       1 service.go:390] Adding new service port "volume-expand-9751-6514/csi-hostpath-resizer:dummy" at 100.65.122.98:12345/TCP
I0802 09:24:18.053144       1 proxier.go:871] Syncing iptables rules
I0802 09:24:18.105027       1 proxier.go:826] syncProxyRules took 52.484009ms
I0802 09:24:18.425551       1 service.go:275] Service volume-expand-9751-6514/csi-hostpath-snapshotter updated: 1 ports
I0802 09:24:19.105803       1 service.go:390] Adding new service port "volume-expand-9751-6514/csi-hostpath-snapshotter:dummy" at 100.71.86.20:12345/TCP
I0802 09:24:19.105909       1 proxier.go:871] Syncing iptables rules
I0802 09:24:19.157553       1 proxier.go:826] syncProxyRules took 52.364187ms
I0802 09:24:20.158406       1 proxier.go:871] Syncing iptables rules
I0802 09:24:20.204043       1 proxier.go:826] syncProxyRules took 46.360039ms
I0802 09:24:21.205101       1 proxier.go:871] Syncing iptables rules
I0802 09:24:21.235539       1 proxier.go:826] syncProxyRules took 31.153419ms
I0802 09:24:22.236530       1 proxier.go:871] Syncing iptables rules
I0802 09:24:22.277471       1 proxier.go:826] syncProxyRules took 41.690948ms
I0802 09:24:23.284598       1 proxier.go:871] Syncing iptables rules
I0802 09:24:23.345744       1 proxier.go:826] syncProxyRules took 68.135895ms
I0802 09:24:24.346793       1 proxier.go:871] Syncing iptables rules
I0802 09:24:24.376221       1 proxier.go:826] syncProxyRules took 30.109661ms
I0802 09:24:27.643625       1 proxier.go:871] Syncing iptables rules
I0802 09:24:27.685871       1 proxier.go:826] syncProxyRules took 42.895532ms
I0802 09:24:27.686592       1 proxier.go:871] Syncing iptables rules
I0802 09:24:27.734087       1 proxier.go:826] syncProxyRules took 48.179494ms
I0802 09:24:28.712982       1 service.go:275] Service provisioning-2278-17/csi-hostpath-attacher updated: 0 ports
I0802 09:24:28.713605       1 service.go:415] Removing service port "provisioning-2278-17/csi-hostpath-attacher:dummy"
I0802 09:24:28.713714       1 proxier.go:871] Syncing iptables rules
I0802 09:24:28.751096       1 proxier.go:826] syncProxyRules took 38.079143ms
I0802 09:24:29.332584       1 service.go:275] Service provisioning-2278-17/csi-hostpathplugin updated: 0 ports
I0802 09:24:29.732233       1 service.go:275] Service provisioning-2278-17/csi-hostpath-provisioner updated: 0 ports
I0802 09:24:29.732997       1 service.go:415] Removing service port "provisioning-2278-17/csi-hostpathplugin:dummy"
I0802 09:24:29.733021       1 service.go:415] Removing service port "provisioning-2278-17/csi-hostpath-provisioner:dummy"
I0802 09:24:29.733131       1 proxier.go:871] Syncing iptables rules
I0802 09:24:29.781252       1 proxier.go:826] syncProxyRules took 48.986048ms
I0802 09:24:30.132830       1 service.go:275] Service provisioning-2278-17/csi-hostpath-resizer updated: 0 ports
I0802 09:24:30.528599       1 service.go:275] Service provisioning-2278-17/csi-hostpath-snapshotter updated: 0 ports
I0802 09:24:30.781990       1 service.go:415] Removing service port "provisioning-2278-17/csi-hostpath-resizer:dummy"
I0802 09:24:30.782022       1 service.go:415] Removing service port "provisioning-2278-17/csi-hostpath-snapshotter:dummy"
I0802 09:24:30.782159       1 proxier.go:871] Syncing iptables rules
I0802 09:24:30.810446       1 proxier.go:826] syncProxyRules took 29.101911ms
I0802 09:24:41.710421       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0802 09:24:41.710782       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0802 09:25:53.880198       1 proxier.go:871] Syncing iptables rules
I0802 09:25:53.912591       1 proxier.go:826] syncProxyRules took 33.009441ms
I0802 09:25:53.913248       1 proxier.go:871] Syncing iptables rules
I0802 09:25:53.945718       1 proxier.go:826] syncProxyRules took 33.093432ms
I0802 09:25:55.182042       1 service.go:275] Service services-1470/up-down-1 updated: 1 ports
I0802 09:25:55.182808       1 service.go:390] Adding new service port "services-1470/up-down-1" at 100.70.4.215:80/TCP
I0802 09:25:55.182951       1 proxier.go:871] Syncing iptables rules
I0802 09:25:55.215605       1 proxier.go:826] syncProxyRules took 33.523316ms
I0802 09:25:56.216389       1 proxier.go:871] Syncing iptables rules
I0802 09:25:56.245561       1 proxier.go:826] syncProxyRules took 29.81058ms
I0802 09:25:56.776852       1 service.go:275] Service kubectl-4027/agnhost-replica updated: 1 ports
I0802 09:25:57.246273       1 service.go:390] Adding new service port "kubectl-4027/agnhost-replica" at 100.71.189.13:6379/TCP
I0802 09:25:57.246406       1 proxier.go:871] Syncing iptables rules
I0802 09:25:57.279402       1 proxier.go:826] syncProxyRules took 33.700647ms
I0802 09:25:58.280197       1 proxier.go:871] Syncing iptables rules
I0802 09:25:58.322197       1 proxier.go:826] syncProxyRules took 42.599423ms
I0802 09:25:58.556645       1 service.go:275] Service kubectl-4027/agnhost-primary updated: 1 ports
I0802 09:25:59.304672       1 service.go:390] Adding new service port "kubectl-4027/agnhost-primary" at 100.66.15.222:6379/TCP
I0802 09:25:59.304825       1 proxier.go:871] Syncing iptables rules
I0802 09:25:59.353320       1 proxier.go:826] syncProxyRules took 49.173106ms
I0802 09:25:59.887571       1 proxier.go:871] Syncing iptables rules
I0802 09:25:59.916820       1 proxier.go:826] syncProxyRules took 29.865972ms
I0802 09:26:00.351044       1 service.go:275] Service kubectl-4027/frontend updated: 1 ports
I0802 09:26:00.917522       1 service.go:390] Adding new service port "kubectl-4027/frontend" at 100.67.213.81:80/TCP
I0802 09:26:00.917603       1 proxier.go:871] Syncing iptables rules
I0802 09:26:00.945864       1 proxier.go:826] syncProxyRules took 28.902953ms
I0802 09:26:01.282918       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-attacher updated: 1 ports
I0802 09:26:01.873479       1 service.go:275] Service volume-expand-5125-7028/csi-hostpathplugin updated: 1 ports
I0802 09:26:01.881955       1 service.go:390] Adding new service port "volume-expand-5125-7028/csi-hostpath-attacher:dummy" at 100.68.29.179:12345/TCP
I0802 09:26:01.881991       1 service.go:390] Adding new service port "volume-expand-5125-7028/csi-hostpathplugin:dummy" at 100.68.226.200:12345/TCP
I0802 09:26:01.882085       1 proxier.go:871] Syncing iptables rules
I0802 09:26:01.930813       1 proxier.go:826] syncProxyRules took 49.030724ms
I0802 09:26:02.260534       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-provisioner updated: 1 ports
I0802 09:26:02.658908       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-resizer updated: 1 ports
I0802 09:26:02.931668       1 service.go:390] Adding new service port "volume-expand-5125-7028/csi-hostpath-resizer:dummy" at 100.66.76.190:12345/TCP
I0802 09:26:02.931699       1 service.go:390] Adding new service port "volume-expand-5125-7028/csi-hostpath-provisioner:dummy" at 100.66.131.101:12345/TCP
I0802 09:26:02.931807       1 proxier.go:871] Syncing iptables rules
I0802 09:26:02.986518       1 proxier.go:826] syncProxyRules took 55.557302ms
I0802 09:26:03.047659       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-snapshotter updated: 1 ports
I0802 09:26:03.987225       1 service.go:390] Adding new service port "volume-expand-5125-7028/csi-hostpath-snapshotter:dummy" at 100.69.27.176:12345/TCP
I0802 09:26:03.987404       1 proxier.go:871] Syncing iptables rules
I0802 09:26:04.036550       1 proxier.go:826] syncProxyRules took 49.960626ms
I0802 09:26:04.959920       1 service.go:275] Service services-1470/up-down-2 updated: 1 ports
I0802 09:26:04.960708       1 service.go:390] Adding new service port "services-1470/up-down-2" at 100.70.71.58:80/TCP
I0802 09:26:04.960846       1 proxier.go:871] Syncing iptables rules
I0802 09:26:05.007259       1 proxier.go:826] syncProxyRules took 47.298991ms
I0802 09:26:05.900995       1 proxier.go:871] Syncing iptables rules
I0802 09:26:05.988670       1 service.go:275] Service webhook-9862/e2e-test-webhook updated: 1 ports
I0802 09:26:05.996988       1 proxier.go:826] syncProxyRules took 96.969396ms
I0802 09:26:06.892732       1 service.go:390] Adding new service port "webhook-9862/e2e-test-webhook" at 100.66.227.222:8443/TCP
I0802 09:26:06.892890       1 proxier.go:871] Syncing iptables rules
I0802 09:26:06.980878       1 proxier.go:826] syncProxyRules took 89.040209ms
I0802 09:26:07.981798       1 proxier.go:871] Syncing iptables rules
I0802 09:26:08.014025       1 proxier.go:826] syncProxyRules took 33.005732ms
I0802 09:26:09.014999       1 proxier.go:871] Syncing iptables rules
I0802 09:26:09.044446       1 proxier.go:826] syncProxyRules took 30.251272ms
I0802 09:26:10.045833       1 proxier.go:871] Syncing iptables rules
I0802 09:26:10.303396       1 proxier.go:826] syncProxyRules took 258.790324ms
I0802 09:26:10.883119       1 proxier.go:871] Syncing iptables rules
I0802 09:26:10.926368       1 proxier.go:826] syncProxyRules took 43.947843ms
I0802 09:26:11.317086       1 service.go:275] Service webhook-9862/e2e-test-webhook updated: 0 ports
I0802 09:26:11.927345       1 service.go:415] Removing service port "webhook-9862/e2e-test-webhook"
I0802 09:26:11.927519       1 proxier.go:871] Syncing iptables rules
I0802 09:26:11.974707       1 proxier.go:826] syncProxyRules took 48.191708ms
I0802 09:26:12.975712       1 proxier.go:871] Syncing iptables rules
I0802 09:26:13.022417       1 proxier.go:826] syncProxyRules took 47.408398ms
I0802 09:26:14.024103       1 proxier.go:871] Syncing iptables rules
I0802 09:26:14.056026       1 proxier.go:826] syncProxyRules took 33.461195ms
I0802 09:26:14.981722       1 service.go:275] Service kubectl-4027/agnhost-replica updated: 0 ports
I0802 09:26:14.982570       1 service.go:415] Removing service port "kubectl-4027/agnhost-replica"
I0802 09:26:14.982697       1 proxier.go:871] Syncing iptables rules
I0802 09:26:15.014758       1 proxier.go:826] syncProxyRules took 32.994597ms
I0802 09:26:15.730941       1 service.go:275] Service services-5870/service-headless-toggled updated: 1 ports
I0802 09:26:15.866874       1 service.go:275] Service kubectl-4027/agnhost-primary updated: 0 ports
I0802 09:26:15.897703       1 service.go:390] Adding new service port "services-5870/service-headless-toggled" at 100.66.82.148:80/TCP
I0802 09:26:15.897733       1 service.go:415] Removing service port "kubectl-4027/agnhost-primary"
I0802 09:26:15.897879       1 proxier.go:871] Syncing iptables rules
I0802 09:26:15.927573       1 proxier.go:826] syncProxyRules took 30.518863ms
I0802 09:26:16.788724       1 service.go:275] Service kubectl-4027/frontend updated: 0 ports
I0802 09:26:16.931119       1 service.go:415] Removing service port "kubectl-4027/frontend"
I0802 09:26:16.931267       1 proxier.go:871] Syncing iptables rules
I0802 09:26:16.974200       1 proxier.go:826] syncProxyRules took 43.848185ms
I0802 09:26:18.173780       1 proxier.go:871] Syncing iptables rules
I0802 09:26:18.217696       1 proxier.go:826] syncProxyRules took 44.7972ms
I0802 09:26:18.891790       1 proxier.go:871] Syncing iptables rules
I0802 09:26:18.926536       1 proxier.go:826] syncProxyRules took 35.743332ms
I0802 09:26:31.260442       1 service.go:275] Service deployment-4314/test-rolling-update-with-lb updated: 0 ports
I0802 09:26:31.261708       1 service.go:415] Removing service port "deployment-4314/test-rolling-update-with-lb"
I0802 09:26:31.261858       1 proxier.go:871] Syncing iptables rules
I0802 09:26:31.298532       1 service_health.go:83] Closing healthcheck "deployment-4314/test-rolling-update-with-lb" on port 31777
I0802 09:26:31.298630       1 proxier.go:826] syncProxyRules took 38.1292ms
I0802 09:26:36.838865       1 service.go:275] Service services-5870/service-headless-toggled updated: 0 ports
I0802 09:26:36.839593       1 service.go:415] Removing service port "services-5870/service-headless-toggled"
I0802 09:26:36.839715       1 proxier.go:871] Syncing iptables rules
I0802 09:26:36.869054       1 proxier.go:826] syncProxyRules took 30.143817ms
I0802 09:26:39.415557       1 proxier.go:871] Syncing iptables rules
I0802 09:26:39.443952       1 proxier.go:826] syncProxyRules took 29.013532ms
I0802 09:26:40.417644       1 proxier.go:871] Syncing iptables rules
I0802 09:26:40.447512       1 proxier.go:826] syncProxyRules took 30.522465ms
I0802 09:26:42.878614       1 service.go:275] Service webhook-3975/e2e-test-webhook updated: 1 ports
I0802 
09:26:42.879218       1 service.go:390] Adding new service port \"webhook-3975/e2e-test-webhook\" at 100.69.160.98:8443/TCP\nI0802 09:26:42.879332       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:42.919461       1 proxier.go:826] syncProxyRules took 40.808451ms\nI0802 09:26:42.920273       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:42.952348       1 proxier.go:826] syncProxyRules took 32.799269ms\nI0802 09:26:44.021672       1 service.go:275] Service services-5870/service-headless-toggled updated: 1 ports\nI0802 09:26:44.022367       1 service.go:390] Adding new service port \"services-5870/service-headless-toggled\" at 100.66.82.148:80/TCP\nI0802 09:26:44.022560       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:44.068127       1 proxier.go:826] syncProxyRules took 46.357729ms\nI0802 09:26:46.294278       1 service.go:275] Service services-1470/up-down-1 updated: 0 ports\nI0802 09:26:46.295451       1 service.go:415] Removing service port \"services-1470/up-down-1\"\nI0802 09:26:46.295621       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:46.378418       1 proxier.go:826] syncProxyRules took 83.805532ms\nI0802 09:26:46.902692       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:47.075757       1 proxier.go:826] syncProxyRules took 173.806077ms\nI0802 09:26:47.424544       1 service.go:275] Service webhook-3975/e2e-test-webhook updated: 0 ports\nI0802 09:26:47.425351       1 service.go:415] Removing service port \"webhook-3975/e2e-test-webhook\"\nI0802 09:26:47.425463       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:47.473434       1 proxier.go:826] syncProxyRules took 48.846195ms\nI0802 09:26:48.474212       1 proxier.go:871] Syncing iptables rules\nI0802 09:26:48.503521       1 proxier.go:826] syncProxyRules took 29.973037ms\nI0802 09:27:03.383501       1 service.go:275] Service services-1470/up-down-3 updated: 1 ports\nI0802 09:27:03.384222       1 service.go:390] Adding new service port 
\"services-1470/up-down-3\" at 100.64.1.233:80/TCP\nI0802 09:27:03.384350       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:03.440420       1 proxier.go:826] syncProxyRules took 56.878903ms\nI0802 09:27:03.441136       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:03.488493       1 proxier.go:826] syncProxyRules took 48.020645ms\nI0802 09:27:05.565441       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:05.595155       1 proxier.go:826] syncProxyRules took 30.355863ms\nI0802 09:27:05.746247       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:05.777076       1 proxier.go:826] syncProxyRules took 32.221068ms\nI0802 09:27:06.251703       1 service.go:275] Service services-5870/service-headless-toggled updated: 0 ports\nI0802 09:27:06.777830       1 service.go:415] Removing service port \"services-5870/service-headless-toggled\"\nI0802 09:27:06.778084       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:06.807916       1 proxier.go:826] syncProxyRules took 30.702874ms\nI0802 09:27:08.947018       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-attacher updated: 1 ports\nI0802 09:27:08.947687       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpath-attacher:dummy\" at 100.64.190.82:12345/TCP\nI0802 09:27:08.947864       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:08.976489       1 proxier.go:826] syncProxyRules took 29.418738ms\nI0802 09:27:08.978310       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:09.009099       1 proxier.go:826] syncProxyRules took 31.694598ms\nI0802 09:27:09.528697       1 service.go:275] Service provisioning-2971-8427/csi-hostpathplugin updated: 1 ports\nI0802 09:27:09.912680       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-provisioner updated: 1 ports\nI0802 09:27:10.009874       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpathplugin:dummy\" at 100.65.218.93:12345/TCP\nI0802 09:27:10.009913       
1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpath-provisioner:dummy\" at 100.66.137.241:12345/TCP\nI0802 09:27:10.010031       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:10.039901       1 proxier.go:826] syncProxyRules took 30.643429ms\nI0802 09:27:10.298688       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-resizer updated: 1 ports\nI0802 09:27:10.688313       1 service.go:275] Service provisioning-2971-8427/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:27:11.040601       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpath-snapshotter:dummy\" at 100.68.17.221:12345/TCP\nI0802 09:27:11.040633       1 service.go:390] Adding new service port \"provisioning-2971-8427/csi-hostpath-resizer:dummy\" at 100.67.174.173:12345/TCP\nI0802 09:27:11.040739       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:11.072034       1 proxier.go:826] syncProxyRules took 31.992618ms\nI0802 09:27:17.830333       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:17.864976       1 proxier.go:826] syncProxyRules took 35.25816ms\nI0802 09:27:18.827302       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:18.862562       1 proxier.go:826] syncProxyRules took 36.357147ms\nI0802 09:27:19.829497       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-attacher updated: 0 ports\nI0802 09:27:19.830153       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpath-attacher:dummy\"\nI0802 09:27:19.830330       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:19.863575       1 proxier.go:826] syncProxyRules took 34.044107ms\nI0802 09:27:20.300226       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:20.332729       1 proxier.go:826] syncProxyRules took 33.084811ms\nI0802 09:27:20.415704       1 service.go:275] Service volume-expand-5125-7028/csi-hostpathplugin updated: 0 ports\nI0802 09:27:20.807151       1 service.go:275] Service 
volume-expand-5125-7028/csi-hostpath-provisioner updated: 0 ports\nI0802 09:27:21.198310       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-resizer updated: 0 ports\nI0802 09:27:21.199036       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpathplugin:dummy\"\nI0802 09:27:21.199061       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpath-provisioner:dummy\"\nI0802 09:27:21.199070       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpath-resizer:dummy\"\nI0802 09:27:21.199255       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:21.230706       1 proxier.go:826] syncProxyRules took 32.357685ms\nI0802 09:27:21.589497       1 service.go:275] Service volume-expand-5125-7028/csi-hostpath-snapshotter updated: 0 ports\nI0802 09:27:22.231432       1 service.go:415] Removing service port \"volume-expand-5125-7028/csi-hostpath-snapshotter:dummy\"\nI0802 09:27:22.231664       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:22.266026       1 proxier.go:826] syncProxyRules took 35.175778ms\nI0802 09:27:23.426901       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:23.457578       1 proxier.go:826] syncProxyRules took 31.407669ms\nI0802 09:27:23.703271       1 service.go:275] Service webhook-8390/e2e-test-webhook updated: 1 ports\nI0802 09:27:24.458350       1 service.go:390] Adding new service port \"webhook-8390/e2e-test-webhook\" at 100.66.116.29:8443/TCP\nI0802 09:27:24.458480       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:24.523883       1 proxier.go:826] syncProxyRules took 66.13275ms\nI0802 09:27:31.638170       1 service.go:275] Service dns-5822/dns-test-service-3 updated: 1 ports\nI0802 09:27:31.639315       1 service.go:390] Adding new service port \"dns-5822/dns-test-service-3:http\" at 100.65.38.43:80/TCP\nI0802 09:27:31.639438       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:31.701804       1 proxier.go:826] syncProxyRules 
took 63.590126ms\nI0802 09:27:33.135945       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:33.174951       1 service.go:275] Service services-1470/up-down-2 updated: 0 ports\nI0802 09:27:33.193259       1 proxier.go:826] syncProxyRules took 58.105317ms\nI0802 09:27:33.193959       1 service.go:415] Removing service port \"services-1470/up-down-2\"\nI0802 09:27:33.194107       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:33.198999       1 service.go:275] Service services-1470/up-down-3 updated: 0 ports\nI0802 09:27:33.262558       1 proxier.go:826] syncProxyRules took 69.246383ms\nI0802 09:27:34.263230       1 service.go:415] Removing service port \"services-1470/up-down-3\"\nI0802 09:27:34.263374       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:34.299519       1 proxier.go:826] syncProxyRules took 36.823219ms\nI0802 09:27:37.357470       1 service.go:275] Service dns-5822/dns-test-service-3 updated: 0 ports\nI0802 09:27:37.358269       1 service.go:415] Removing service port \"dns-5822/dns-test-service-3:http\"\nI0802 09:27:37.358584       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:37.399372       1 proxier.go:826] syncProxyRules took 41.678395ms\nI0802 09:27:38.401924       1 service.go:275] Service webhook-8390/e2e-test-webhook updated: 0 ports\nI0802 09:27:38.402708       1 service.go:415] Removing service port \"webhook-8390/e2e-test-webhook\"\nI0802 09:27:38.402820       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:38.437527       1 proxier.go:826] syncProxyRules took 35.531059ms\nI0802 09:27:38.438231       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:38.471337       1 proxier.go:826] syncProxyRules took 33.775147ms\nI0802 09:27:39.865087       1 service.go:275] Service webhook-4152/e2e-test-webhook updated: 1 ports\nI0802 09:27:39.866088       1 service.go:390] Adding new service port \"webhook-4152/e2e-test-webhook\" at 100.66.210.237:8443/TCP\nI0802 09:27:39.866220       1 proxier.go:871] Syncing 
iptables rules\nI0802 09:27:39.908355       1 proxier.go:826] syncProxyRules took 43.2287ms\nI0802 09:27:40.909096       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:40.939765       1 proxier.go:826] syncProxyRules took 31.280121ms\nI0802 09:27:42.972726       1 service.go:275] Service webhook-4152/e2e-test-webhook updated: 0 ports\nI0802 09:27:42.973387       1 service.go:415] Removing service port \"webhook-4152/e2e-test-webhook\"\nI0802 09:27:42.973495       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:43.017386       1 proxier.go:826] syncProxyRules took 44.620043ms\nI0802 09:27:43.445014       1 proxier.go:871] Syncing iptables rules\nI0802 09:27:43.481171       1 proxier.go:826] syncProxyRules took 36.905475ms\nI0802 09:28:09.150334       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-attacher updated: 1 ports\nI0802 09:28:09.151137       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpath-attacher:dummy\" at 100.65.180.222:12345/TCP\nI0802 09:28:09.151251       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:09.181113       1 proxier.go:826] syncProxyRules took 30.738028ms\nI0802 09:28:09.182051       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:09.211182       1 proxier.go:826] syncProxyRules took 30.035559ms\nI0802 09:28:09.725576       1 service.go:275] Service provisioning-4508-9194/csi-hostpathplugin updated: 1 ports\nI0802 09:28:10.111237       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-provisioner updated: 1 ports\nI0802 09:28:10.211910       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpath-provisioner:dummy\" at 100.68.168.136:12345/TCP\nI0802 09:28:10.211944       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpathplugin:dummy\" at 100.65.224.153:12345/TCP\nI0802 09:28:10.212065       1 proxier.go:871] Syncing iptables rules\nI0802 09:28:10.244462       1 proxier.go:826] syncProxyRules took 
33.123447ms\nI0802 09:28:10.500075       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-resizer updated: 1 ports\nI0802 09:28:10.884964       1 service.go:275] Service provisioning-4508-9194/csi-hostpath-snapshotter updated: 1 ports\nI0802 09:28:11.166158       1 service.go:390] Adding new service port \"provisioning-4508-9194/csi-hostpath-resizer:dummy\