PR rifelpet: Support GSFS Terraform Managed Files
Result FAILURE
Tests 0 failed / 0 succeeded
Started 2021-10-02 13:25
Elapsed 44m33s
Revision 33cc579050b8b86401f0057bbb387ad5a55678a5
Refs 12121

No Test Failures!


Error lines from build-log.txt

... skipping 577 lines ...
Operation completed over 1 objects/153.0 B.                                      
I1002 13:31:53.043461    4952 copy.go:30] cp /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops /logs/artifacts/1860ffc7-2384-11ec-82b1-3270677884c7/kops
I1002 13:31:53.235970    4952 up.go:43] Cleaning up any leaked resources from previous cluster
I1002 13:31:53.236103    4952 dumplogs.go:40] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops toolbox dump --name e2e-de872154ff-19973.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1002 13:31:53.255281   13541 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1002 13:31:53.255401   13541 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-de872154ff-19973.test-cncf-aws.k8s.io" not found
W1002 13:31:53.712243    4952 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1002 13:31:53.712382    4952 down.go:48] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops delete cluster --name e2e-de872154ff-19973.test-cncf-aws.k8s.io --yes
I1002 13:31:53.732099   13550 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1002 13:31:53.732201   13550 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-de872154ff-19973.test-cncf-aws.k8s.io" not found
I1002 13:31:54.166192    4952 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/02 13:31:54 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1002 13:31:54.174498    4952 http.go:37] curl https://ip.jsb.workers.dev
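The two curl lines above are the harness's external-IP discovery: the GCE metadata lookup 404s (this Prow pod has no external access-config), so it falls back to a public IP-echo endpoint, and the resulting address becomes the --admin-access CIDR in the create-cluster command below. A minimal Go sketch of that try-then-fall-back logic, inferred from the log rather than taken from the harness's source:

// Sketch of the external-IP lookup logged above: try the GCE metadata
// server first; on any failure, fall back to a public IP-echo endpoint.
package main

import (
	"fmt"
	"io"
	"net/http"
)

const metadataURL = "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip"

func externalIP() (string, error) {
	req, _ := http.NewRequest("GET", metadataURL, nil)
	req.Header.Set("Metadata-Flavor", "Google") // required by the GCE metadata API
	if resp, err := http.DefaultClient.Do(req); err == nil {
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			b, err := io.ReadAll(resp.Body)
			return string(b), err
		}
	}
	// Fallback used above once the metadata server returns 404.
	resp, err := http.Get("https://ip.jsb.workers.dev")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	ip, err := externalIP()
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}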
I1002 13:31:54.234533    4952 up.go:144] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops create cluster --name e2e-de872154ff-19973.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.5 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210927 --channel=alpha --networking=canal --container-runtime=containerd --node-size=t3.large --admin-access 34.70.68.82/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-southeast-2a --master-size c5.large
I1002 13:31:54.253326   13561 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1002 13:31:54.253419   13561 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1002 13:31:54.276439   13561 create_cluster.go:838] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I1002 13:31:54.738938   13561 new_cluster.go:1077]  Cloud Provider ID = aws
... skipping 39 lines ...

I1002 13:32:11.863743    4952 up.go:181] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops validate cluster --name e2e-de872154ff-19973.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1002 13:32:11.882057   13579 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1002 13:32:11.882155   13579 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-de872154ff-19973.test-cncf-aws.k8s.io

W1002 13:32:13.447529   13579 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-de872154ff-19973.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
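The validator's message describes the mechanism it is waiting on: kops seeds api.<cluster-name> with the documentation placeholder 203.0.113.123, and the dns-controller deployment later rewrites the record to the real master IP. A minimal Go sketch of that readiness check, using only the record name and placeholder value that appear in this log (this is not the kops validator itself):

// Resolve the cluster API record and report whether it is absent, still
// the kops placeholder, or updated to a real address.
package main

import (
	"fmt"
	"net"
)

const placeholder = "203.0.113.123" // the placeholder address kops creates

func main() {
	apiHost := "api.e2e-de872154ff-19973.test-cncf-aws.k8s.io" // from the log above
	addrs, err := net.LookupHost(apiHost)
	if err != nil {
		fmt.Println("record not resolvable yet:", err) // the "no such host" case above
		return
	}
	for _, addr := range addrs {
		if addr == placeholder {
			fmt.Println("dns-controller has not yet updated the record")
			return
		}
	}
	fmt.Println("record updated:", addrs)
}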
W1002 13:32:23.485613   13579 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping repeated validation output: the same INSTANCE GROUPS / VALIDATION ERRORS block recurs roughly every 10s from 13:32:23 through 13:36:14, with further "no such host" DNS lookup failures at 13:33:33 and 13:35:24 ...
W1002 13:36:24.447809   13579 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

... skipping 14 lines ...
Pod	kube-system/canal-5h4w9							system-node-critical pod "canal-5h4w9" is pending
Pod	kube-system/canal-ddn47							system-node-critical pod "canal-ddn47" is pending
Pod	kube-system/coredns-5dc785954d-kt26c					system-cluster-critical pod "coredns-5dc785954d-kt26c" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-bbcmp				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-bbcmp" is pending
Pod	kube-system/kube-proxy-ip-172-20-46-238.ap-southeast-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-46-238.ap-southeast-2.compute.internal" is pending

Validation Failed
W1002 13:36:38.732841   13579 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

... skipping 14 lines ...
Pod	kube-system/canal-5h4w9					system-node-critical pod "canal-5h4w9" is pending
Pod	kube-system/canal-8br4g					system-node-critical pod "canal-8br4g" is pending
Pod	kube-system/canal-ddn47					system-node-critical pod "canal-ddn47" is pending
Pod	kube-system/coredns-5dc785954d-kt26c			system-cluster-critical pod "coredns-5dc785954d-kt26c" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-bbcmp		system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-bbcmp" is pending

Validation Failed
W1002 13:36:51.849209   13579 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

... skipping 12 lines ...
Pod	kube-system/canal-5h4w9					system-node-critical pod "canal-5h4w9" is pending
Pod	kube-system/canal-8br4g					system-node-critical pod "canal-8br4g" is pending
Pod	kube-system/canal-ddn47					system-node-critical pod "canal-ddn47" is pending
Pod	kube-system/coredns-5dc785954d-kt26c			system-cluster-critical pod "coredns-5dc785954d-kt26c" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-bbcmp		system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-bbcmp" is pending

Validation Failed
W1002 13:37:04.864396   13579 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

... skipping 6 lines ...
ip-172-20-49-155.ap-southeast-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME			MESSAGE
Pod	kube-system/canal-8br4g	system-node-critical pod "canal-8br4g" is pending

Validation Failed
W1002 13:37:17.897790   13579 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-southeast-2a	Master	c5.large	1	1	ap-southeast-2a
nodes-ap-southeast-2a	Node	t3.large	4	4	ap-southeast-2a

... skipping 937 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:40:01.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6214" for this suite.

•SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:40:02.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-6222" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:40:02.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7139" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:03.291: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 63 lines ...
STEP: Creating a kubernetes client
Oct  2 13:40:00.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
W1002 13:40:00.943889   14266 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct  2 13:40:00.944: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:253
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:40:03.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1040" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:04.301: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
STEP: Destroying namespace "services-2368" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:04.976: INFO: Only supported for providers [vsphere] (not aws)
... skipping 90 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:40:05.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-4718" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:05.403: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
Oct  2 13:40:00.848: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-331b47ac-8efb-4d63-879e-861c5cce5f90
STEP: Creating a pod to test consume configMaps
Oct  2 13:40:01.607: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-99acf782-0d68-45ec-906f-4b6f2d05bc08" in namespace "projected-8780" to be "Succeeded or Failed"
Oct  2 13:40:01.796: INFO: Pod "pod-projected-configmaps-99acf782-0d68-45ec-906f-4b6f2d05bc08": Phase="Pending", Reason="", readiness=false. Elapsed: 188.986468ms
Oct  2 13:40:03.987: INFO: Pod "pod-projected-configmaps-99acf782-0d68-45ec-906f-4b6f2d05bc08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379646076s
Oct  2 13:40:06.178: INFO: Pod "pod-projected-configmaps-99acf782-0d68-45ec-906f-4b6f2d05bc08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570184157s
Oct  2 13:40:08.369: INFO: Pod "pod-projected-configmaps-99acf782-0d68-45ec-906f-4b6f2d05bc08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.761335288s
STEP: Saw pod success
Oct  2 13:40:08.369: INFO: Pod "pod-projected-configmaps-99acf782-0d68-45ec-906f-4b6f2d05bc08" satisfied condition "Succeeded or Failed"
Oct  2 13:40:08.564: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-projected-configmaps-99acf782-0d68-45ec-906f-4b6f2d05bc08 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 13:40:08.968: INFO: Waiting for pod pod-projected-configmaps-99acf782-0d68-45ec-906f-4b6f2d05bc08 to disappear
Oct  2 13:40:09.158: INFO: Pod pod-projected-configmaps-99acf782-0d68-45ec-906f-4b6f2d05bc08 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.643 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:09.740: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
W1002 13:40:02.163823   14288 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct  2 13:40:02.163: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Oct  2 13:40:02.740: INFO: Waiting up to 5m0s for pod "client-containers-2ee5565a-03b3-4c44-896a-7262523fd141" in namespace "containers-4461" to be "Succeeded or Failed"
Oct  2 13:40:02.930: INFO: Pod "client-containers-2ee5565a-03b3-4c44-896a-7262523fd141": Phase="Pending", Reason="", readiness=false. Elapsed: 189.32017ms
Oct  2 13:40:05.119: INFO: Pod "client-containers-2ee5565a-03b3-4c44-896a-7262523fd141": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379024561s
Oct  2 13:40:07.310: INFO: Pod "client-containers-2ee5565a-03b3-4c44-896a-7262523fd141": Phase="Pending", Reason="", readiness=false. Elapsed: 4.569229508s
Oct  2 13:40:09.501: INFO: Pod "client-containers-2ee5565a-03b3-4c44-896a-7262523fd141": Phase="Pending", Reason="", readiness=false. Elapsed: 6.760083898s
Oct  2 13:40:11.691: INFO: Pod "client-containers-2ee5565a-03b3-4c44-896a-7262523fd141": Phase="Pending", Reason="", readiness=false. Elapsed: 8.950415211s
Oct  2 13:40:13.881: INFO: Pod "client-containers-2ee5565a-03b3-4c44-896a-7262523fd141": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.140937583s
STEP: Saw pod success
Oct  2 13:40:13.881: INFO: Pod "client-containers-2ee5565a-03b3-4c44-896a-7262523fd141" satisfied condition "Succeeded or Failed"
Oct  2 13:40:14.071: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod client-containers-2ee5565a-03b3-4c44-896a-7262523fd141 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 13:40:14.506: INFO: Waiting for pod client-containers-2ee5565a-03b3-4c44-896a-7262523fd141 to disappear
Oct  2 13:40:14.697: INFO: Pod client-containers-2ee5565a-03b3-4c44-896a-7262523fd141 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:15.037 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:15.293: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 13:40:05.501: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95179d8e-b310-4d4d-916d-9d1bfd40510b" in namespace "downward-api-1866" to be "Succeeded or Failed"
Oct  2 13:40:05.700: INFO: Pod "downwardapi-volume-95179d8e-b310-4d4d-916d-9d1bfd40510b": Phase="Pending", Reason="", readiness=false. Elapsed: 198.887478ms
Oct  2 13:40:07.893: INFO: Pod "downwardapi-volume-95179d8e-b310-4d4d-916d-9d1bfd40510b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391375143s
Oct  2 13:40:10.087: INFO: Pod "downwardapi-volume-95179d8e-b310-4d4d-916d-9d1bfd40510b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.585608111s
Oct  2 13:40:12.298: INFO: Pod "downwardapi-volume-95179d8e-b310-4d4d-916d-9d1bfd40510b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.796384916s
Oct  2 13:40:14.497: INFO: Pod "downwardapi-volume-95179d8e-b310-4d4d-916d-9d1bfd40510b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.995963215s
STEP: Saw pod success
Oct  2 13:40:14.498: INFO: Pod "downwardapi-volume-95179d8e-b310-4d4d-916d-9d1bfd40510b" satisfied condition "Succeeded or Failed"
Oct  2 13:40:14.689: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod downwardapi-volume-95179d8e-b310-4d4d-916d-9d1bfd40510b container client-container: <nil>
STEP: delete the pod
Oct  2 13:40:15.152: INFO: Waiting for pod downwardapi-volume-95179d8e-b310-4d4d-916d-9d1bfd40510b to disappear
Oct  2 13:40:15.345: INFO: Pod downwardapi-volume-95179d8e-b310-4d4d-916d-9d1bfd40510b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.392 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:15.768: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 54 lines ...
• [SLOW TEST:16.308 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:40:00.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
W1002 13:40:00.905566   14331 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct  2 13:40:00.905: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:40:17.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8668" for this suite.


• [SLOW TEST:18.311 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 70 lines ...
• [SLOW TEST:22.104 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:22.338: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 182 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Oct  2 13:40:16.769: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  2 13:40:16.769: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-2twj
STEP: Creating a pod to test subpath
Oct  2 13:40:16.966: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-2twj" in namespace "provisioning-3860" to be "Succeeded or Failed"
Oct  2 13:40:17.159: INFO: Pod "pod-subpath-test-inlinevolume-2twj": Phase="Pending", Reason="", readiness=false. Elapsed: 192.526105ms
Oct  2 13:40:19.353: INFO: Pod "pod-subpath-test-inlinevolume-2twj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38644553s
Oct  2 13:40:21.547: INFO: Pod "pod-subpath-test-inlinevolume-2twj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.580289282s
STEP: Saw pod success
Oct  2 13:40:21.547: INFO: Pod "pod-subpath-test-inlinevolume-2twj" satisfied condition "Succeeded or Failed"
Oct  2 13:40:21.739: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod pod-subpath-test-inlinevolume-2twj container test-container-volume-inlinevolume-2twj: <nil>
STEP: delete the pod
Oct  2 13:40:22.654: INFO: Waiting for pod pod-subpath-test-inlinevolume-2twj to disappear
Oct  2 13:40:22.847: INFO: Pod pod-subpath-test-inlinevolume-2twj no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-2twj
Oct  2 13:40:22.847: INFO: Deleting pod "pod-subpath-test-inlinevolume-2twj" in namespace "provisioning-3860"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":14,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 58 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306
    should update the label on a resource  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:23.907: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 160 lines ...
• [SLOW TEST:27.845 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:40:20.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct  2 13:40:21.428: INFO: Waiting up to 5m0s for pod "pod-043a3688-e0ee-4d81-88ca-4defc9fca068" in namespace "emptydir-4927" to be "Succeeded or Failed"
Oct  2 13:40:21.618: INFO: Pod "pod-043a3688-e0ee-4d81-88ca-4defc9fca068": Phase="Pending", Reason="", readiness=false. Elapsed: 190.300891ms
Oct  2 13:40:23.809: INFO: Pod "pod-043a3688-e0ee-4d81-88ca-4defc9fca068": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381145094s
Oct  2 13:40:26.000: INFO: Pod "pod-043a3688-e0ee-4d81-88ca-4defc9fca068": Phase="Pending", Reason="", readiness=false. Elapsed: 4.57153551s
Oct  2 13:40:28.190: INFO: Pod "pod-043a3688-e0ee-4d81-88ca-4defc9fca068": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.762130181s
STEP: Saw pod success
Oct  2 13:40:28.190: INFO: Pod "pod-043a3688-e0ee-4d81-88ca-4defc9fca068" satisfied condition "Succeeded or Failed"
Oct  2 13:40:28.380: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod pod-043a3688-e0ee-4d81-88ca-4defc9fca068 container test-container: <nil>
STEP: delete the pod
Oct  2 13:40:28.770: INFO: Waiting for pod pod-043a3688-e0ee-4d81-88ca-4defc9fca068 to disappear
Oct  2 13:40:28.962: INFO: Pod pod-043a3688-e0ee-4d81-88ca-4defc9fca068 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.076 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:29.380: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 39 lines ...
• [SLOW TEST:29.724 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:33.793: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:34.507: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 31 lines ...
Oct  2 13:40:02.457: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-6360gls8v
STEP: creating a claim
Oct  2 13:40:02.650: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Oct  2 13:40:03.039: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct  2 13:40:03.434: INFO: Error updating pvc aws5t8hf: PersistentVolumeClaim "aws5t8hf" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6360gls8v",
  	... // 2 identical fields
  }

... skipping 210 lines ...
Oct  2 13:40:34.211: INFO: Error updating pvc aws5t8hf: PersistentVolumeClaim "aws5t8hf" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":6,"failed":0}

SSS
------------------------------
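The long run of "Forbidden: spec is immutable" retries above is the expected outcome: the apiserver only accepts edits to spec.resources.requests on a bound claim when the claim's StorageClass enables expansion. A sketch of a class that would allow it, assuming the k8s.io/api/storage/v1 types (the class name is illustrative):

```go
package main

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// expandableClass returns a StorageClass whose PVCs may be resized. Without
// AllowVolumeExpansion, every update to a bound claim's requests is rejected
// with exactly the "spec is immutable" error seen above.
func expandableClass() *storagev1.StorageClass {
	allow := true
	return &storagev1.StorageClass{
		ObjectMeta:           metav1.ObjectMeta{Name: "expandable-ebs"},
		Provisioner:          "kubernetes.io/aws-ebs",
		AllowVolumeExpansion: &allow,
	}
}
```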
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:40:35.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5516" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:36.142: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 17 lines ...
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:40:34.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Oct  2 13:40:35.683: INFO: found topology map[topology.kubernetes.io/zone:ap-southeast-2a]
Oct  2 13:40:35.683: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct  2 13:40:35.683: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
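The AllowedTopologies case above needs at least two zones to provoke a real conflict, so it is skipped in this single-zone (ap-southeast-2a) cluster. A sketch of the kind of zone-restricted StorageClass the test builds, assuming k8s.io/api/storage/v1 types (the class name is illustrative):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// zonedClass pins provisioning to one zone via AllowedTopologies. A pod whose
// required topology conflicts with this restriction cannot be scheduled,
// which is the failure mode the test tries to provoke.
func zonedClass() *storagev1.StorageClass {
	bind := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "zoned-ebs"},
		Provisioner:       "kubernetes.io/aws-ebs",
		VolumeBindingMode: &bind,
		AllowedTopologies: []corev1.TopologySelectorTerm{{
			MatchLabelExpressions: []corev1.TopologySelectorLabelRequirement{{
				Key:    "topology.kubernetes.io/zone",
				Values: []string{"ap-southeast-2a"},
			}},
		}},
	}
}
```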
... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:42.308: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":9,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:43.370: INFO: Only supported for providers [vsphere] (not aws)
... skipping 129 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:45.979: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 123 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:40:49.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6627" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":2,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
Oct  2 13:40:40.006: INFO: PersistentVolumeClaim pvc-2lgzk found and phase=Bound (4.572865211s)
Oct  2 13:40:40.006: INFO: Waiting up to 3m0s for PersistentVolume nfs-l648v to have phase Bound
Oct  2 13:40:40.199: INFO: PersistentVolume nfs-l648v found and phase=Bound (193.123315ms)
STEP: Checking pod has write access to PersistentVolume
Oct  2 13:40:40.581: INFO: Creating nfs test pod
Oct  2 13:40:40.772: INFO: Pod should terminate with exitcode 0 (success)
Oct  2 13:40:40.772: INFO: Waiting up to 5m0s for pod "pvc-tester-jm524" in namespace "pv-6495" to be "Succeeded or Failed"
Oct  2 13:40:40.962: INFO: Pod "pvc-tester-jm524": Phase="Pending", Reason="", readiness=false. Elapsed: 189.92805ms
Oct  2 13:40:43.153: INFO: Pod "pvc-tester-jm524": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.381080972s
STEP: Saw pod success
Oct  2 13:40:43.154: INFO: Pod "pvc-tester-jm524" satisfied condition "Succeeded or Failed"
Oct  2 13:40:43.154: INFO: Pod pvc-tester-jm524 succeeded 
Oct  2 13:40:43.154: INFO: Deleting pod "pvc-tester-jm524" in namespace "pv-6495"
Oct  2 13:40:43.358: INFO: Wait up to 5m0s for pod "pvc-tester-jm524" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Oct  2 13:40:43.548: INFO: Deleting PVC pvc-2lgzk to trigger reclamation of PV nfs-l648v
Oct  2 13:40:43.548: INFO: Deleting PersistentVolumeClaim "pvc-2lgzk"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":2,"skipped":2,"failed":0}

SS
------------------------------
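In the pre-bound PVC case above, the claim names its PV up front instead of relying on the binder's matching logic. A sketch of such a claim, assuming k8s.io/api/core/v1 types (names are illustrative):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preBoundPVC returns a claim pinned to a specific PV by name. Setting
// VolumeName skips dynamic provisioning and binds the claim directly to the
// named PV once both objects exist.
func preBoundPVC(ns, pvName string) *corev1.PersistentVolumeClaim {
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-", Namespace: ns},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
			VolumeName: pvName,
		},
	}
}
```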
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:40:32.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
• [SLOW TEST:21.419 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:53.547: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 52 lines ...
Oct  2 13:40:04.253: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8641w8572
STEP: creating a claim
Oct  2 13:40:04.445: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-ljw6
STEP: Creating a pod to test subpath
Oct  2 13:40:05.018: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ljw6" in namespace "provisioning-8641" to be "Succeeded or Failed"
Oct  2 13:40:05.207: INFO: Pod "pod-subpath-test-dynamicpv-ljw6": Phase="Pending", Reason="", readiness=false. Elapsed: 189.688522ms
Oct  2 13:40:07.399: INFO: Pod "pod-subpath-test-dynamicpv-ljw6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381811892s
Oct  2 13:40:09.590: INFO: Pod "pod-subpath-test-dynamicpv-ljw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572547987s
Oct  2 13:40:11.781: INFO: Pod "pod-subpath-test-dynamicpv-ljw6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.762920469s
Oct  2 13:40:13.972: INFO: Pod "pod-subpath-test-dynamicpv-ljw6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.953847316s
Oct  2 13:40:16.162: INFO: Pod "pod-subpath-test-dynamicpv-ljw6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.144143463s
... skipping 2 lines ...
Oct  2 13:40:22.735: INFO: Pod "pod-subpath-test-dynamicpv-ljw6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.71689535s
Oct  2 13:40:24.926: INFO: Pod "pod-subpath-test-dynamicpv-ljw6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.908224675s
Oct  2 13:40:27.116: INFO: Pod "pod-subpath-test-dynamicpv-ljw6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.098363363s
Oct  2 13:40:29.308: INFO: Pod "pod-subpath-test-dynamicpv-ljw6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.29056491s
Oct  2 13:40:31.498: INFO: Pod "pod-subpath-test-dynamicpv-ljw6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.480287571s
STEP: Saw pod success
Oct  2 13:40:31.498: INFO: Pod "pod-subpath-test-dynamicpv-ljw6" satisfied condition "Succeeded or Failed"
Oct  2 13:40:31.689: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-ljw6 container test-container-volume-dynamicpv-ljw6: <nil>
STEP: delete the pod
Oct  2 13:40:32.075: INFO: Waiting for pod pod-subpath-test-dynamicpv-ljw6 to disappear
Oct  2 13:40:32.264: INFO: Pod pod-subpath-test-dynamicpv-ljw6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ljw6
Oct  2 13:40:32.265: INFO: Deleting pod "pod-subpath-test-dynamicpv-ljw6" in namespace "provisioning-8641"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":3,"failed":0}

SSSSSSSSSSS
------------------------------
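The repeated "Waiting up to 5m0s for pod ... Phase=\"Pending\"" lines throughout this log come from a poll loop over the pod's status. A minimal sketch of such a loop with client-go and wait.PollImmediate; the helper name and intervals are assumptions, not the framework's actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodTerminated polls the pod's phase every 2s for up to 5m, matching
// the cadence of the log lines above: done on Succeeded, error on Failed,
// keep polling while the pod is Pending or Running.
func waitForPodTerminated(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		}
		return false, nil
	})
}
```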
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:55.345: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:40:26.184: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Oct  2 13:40:38.263: INFO: PersistentVolumeClaim pvc-njtbw found but phase is Pending instead of Bound.
Oct  2 13:40:40.456: INFO: PersistentVolumeClaim pvc-njtbw found and phase=Bound (8.950906688s)
Oct  2 13:40:40.456: INFO: Waiting up to 3m0s for PersistentVolume local-flsdg to have phase Bound
Oct  2 13:40:40.650: INFO: PersistentVolume local-flsdg found and phase=Bound (194.746568ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-wkft
STEP: Creating a pod to test exec-volume-test
Oct  2 13:40:41.222: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-wkft" in namespace "volume-1099" to be "Succeeded or Failed"
Oct  2 13:40:41.414: INFO: Pod "exec-volume-test-preprovisionedpv-wkft": Phase="Pending", Reason="", readiness=false. Elapsed: 192.154749ms
Oct  2 13:40:43.605: INFO: Pod "exec-volume-test-preprovisionedpv-wkft": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382779318s
Oct  2 13:40:45.796: INFO: Pod "exec-volume-test-preprovisionedpv-wkft": Phase="Pending", Reason="", readiness=false. Elapsed: 4.573961048s
Oct  2 13:40:47.986: INFO: Pod "exec-volume-test-preprovisionedpv-wkft": Phase="Pending", Reason="", readiness=false. Elapsed: 6.764027982s
Oct  2 13:40:50.176: INFO: Pod "exec-volume-test-preprovisionedpv-wkft": Phase="Running", Reason="", readiness=true. Elapsed: 8.954395531s
Oct  2 13:40:52.367: INFO: Pod "exec-volume-test-preprovisionedpv-wkft": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.144535581s
STEP: Saw pod success
Oct  2 13:40:52.367: INFO: Pod "exec-volume-test-preprovisionedpv-wkft" satisfied condition "Succeeded or Failed"
Oct  2 13:40:52.556: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod exec-volume-test-preprovisionedpv-wkft container exec-container-preprovisionedpv-wkft: <nil>
STEP: delete the pod
Oct  2 13:40:52.948: INFO: Waiting for pod exec-volume-test-preprovisionedpv-wkft to disappear
Oct  2 13:40:53.139: INFO: Pod exec-volume-test-preprovisionedpv-wkft no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-wkft
Oct  2 13:40:53.139: INFO: Deleting pod "exec-volume-test-preprovisionedpv-wkft" in namespace "volume-1099"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":0,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:56.803: INFO: Only supported for providers [gce gke] (not aws)
... skipping 34 lines ...
Oct  2 13:40:01.602: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6483vb78k
STEP: creating a claim
Oct  2 13:40:01.795: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-bcl5
STEP: Creating a pod to test subpath
Oct  2 13:40:02.389: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-bcl5" in namespace "provisioning-6483" to be "Succeeded or Failed"
Oct  2 13:40:02.582: INFO: Pod "pod-subpath-test-dynamicpv-bcl5": Phase="Pending", Reason="", readiness=false. Elapsed: 192.485071ms
Oct  2 13:40:04.776: INFO: Pod "pod-subpath-test-dynamicpv-bcl5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386576274s
Oct  2 13:40:06.969: INFO: Pod "pod-subpath-test-dynamicpv-bcl5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.57960994s
Oct  2 13:40:09.193: INFO: Pod "pod-subpath-test-dynamicpv-bcl5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.803986373s
Oct  2 13:40:11.387: INFO: Pod "pod-subpath-test-dynamicpv-bcl5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.997440289s
Oct  2 13:40:13.580: INFO: Pod "pod-subpath-test-dynamicpv-bcl5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.190571978s
... skipping 2 lines ...
Oct  2 13:40:20.165: INFO: Pod "pod-subpath-test-dynamicpv-bcl5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.77595767s
Oct  2 13:40:22.359: INFO: Pod "pod-subpath-test-dynamicpv-bcl5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.970214714s
Oct  2 13:40:24.572: INFO: Pod "pod-subpath-test-dynamicpv-bcl5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.1826866s
Oct  2 13:40:26.765: INFO: Pod "pod-subpath-test-dynamicpv-bcl5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.37627043s
Oct  2 13:40:28.963: INFO: Pod "pod-subpath-test-dynamicpv-bcl5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.57362726s
STEP: Saw pod success
Oct  2 13:40:28.963: INFO: Pod "pod-subpath-test-dynamicpv-bcl5" satisfied condition "Succeeded or Failed"
Oct  2 13:40:29.156: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-bcl5 container test-container-subpath-dynamicpv-bcl5: <nil>
STEP: delete the pod
Oct  2 13:40:29.568: INFO: Waiting for pod pod-subpath-test-dynamicpv-bcl5 to disappear
Oct  2 13:40:29.760: INFO: Pod pod-subpath-test-dynamicpv-bcl5 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-bcl5
Oct  2 13:40:29.760: INFO: Deleting pod "pod-subpath-test-dynamicpv-bcl5" in namespace "provisioning-6483"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:40:28.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:30.860 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SS
------------------------------
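"Capture the life of a configMap" means the quota's status.used count must track ConfigMap creation and deletion. A sketch of a quota that counts ConfigMaps, assuming k8s.io/api/core/v1 types (the quota name is illustrative):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapQuota caps the number of ConfigMaps in a namespace; the quota
// controller updates status.used as ConfigMaps come and go, which is the
// behaviour the test observes.
func configMapQuota(ns string) *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota", Namespace: ns},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceConfigMaps: resource.MustParse("2"),
			},
		},
	}
}
```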
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:40:58.935: INFO: Only supported for providers [gce gke] (not aws)
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:40:59.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1154" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:00.281: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 92 lines ...
• [SLOW TEST:5.055 seconds]
[sig-network] Netpol API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":3,"skipped":7,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:05.411: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 65 lines ...
Oct  2 13:40:52.509: INFO: PersistentVolumeClaim pvc-mcqh7 found but phase is Pending instead of Bound.
Oct  2 13:40:54.705: INFO: PersistentVolumeClaim pvc-mcqh7 found and phase=Bound (11.159027341s)
Oct  2 13:40:54.705: INFO: Waiting up to 3m0s for PersistentVolume local-k54z8 to have phase Bound
Oct  2 13:40:54.897: INFO: PersistentVolume local-k54z8 found and phase=Bound (191.871582ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wzmf
STEP: Creating a pod to test subpath
Oct  2 13:40:55.477: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wzmf" in namespace "provisioning-4648" to be "Succeeded or Failed"
Oct  2 13:40:55.669: INFO: Pod "pod-subpath-test-preprovisionedpv-wzmf": Phase="Pending", Reason="", readiness=false. Elapsed: 192.301265ms
Oct  2 13:40:57.863: INFO: Pod "pod-subpath-test-preprovisionedpv-wzmf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386443291s
Oct  2 13:41:00.056: INFO: Pod "pod-subpath-test-preprovisionedpv-wzmf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579596566s
Oct  2 13:41:02.250: INFO: Pod "pod-subpath-test-preprovisionedpv-wzmf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.772952027s
STEP: Saw pod success
Oct  2 13:41:02.250: INFO: Pod "pod-subpath-test-preprovisionedpv-wzmf" satisfied condition "Succeeded or Failed"
Oct  2 13:41:02.445: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-wzmf container test-container-volume-preprovisionedpv-wzmf: <nil>
STEP: delete the pod
Oct  2 13:41:02.885: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wzmf to disappear
Oct  2 13:41:03.077: INFO: Pod pod-subpath-test-preprovisionedpv-wzmf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wzmf
Oct  2 13:41:03.078: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wzmf" in namespace "provisioning-4648"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 94 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:08.219: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 126 lines ...
Oct  2 13:40:52.716: INFO: PersistentVolumeClaim pvc-hjkpr found but phase is Pending instead of Bound.
Oct  2 13:40:54.908: INFO: PersistentVolumeClaim pvc-hjkpr found and phase=Bound (2.382759979s)
Oct  2 13:40:54.908: INFO: Waiting up to 3m0s for PersistentVolume local-ch7qs to have phase Bound
Oct  2 13:40:55.099: INFO: PersistentVolume local-ch7qs found and phase=Bound (191.044497ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4t9w
STEP: Creating a pod to test subpath
Oct  2 13:40:55.680: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4t9w" in namespace "provisioning-5417" to be "Succeeded or Failed"
Oct  2 13:40:55.874: INFO: Pod "pod-subpath-test-preprovisionedpv-4t9w": Phase="Pending", Reason="", readiness=false. Elapsed: 194.564933ms
Oct  2 13:40:58.068: INFO: Pod "pod-subpath-test-preprovisionedpv-4t9w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387757687s
Oct  2 13:41:00.260: INFO: Pod "pod-subpath-test-preprovisionedpv-4t9w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579919821s
Oct  2 13:41:02.452: INFO: Pod "pod-subpath-test-preprovisionedpv-4t9w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.772006033s
Oct  2 13:41:04.647: INFO: Pod "pod-subpath-test-preprovisionedpv-4t9w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.967255369s
STEP: Saw pod success
Oct  2 13:41:04.647: INFO: Pod "pod-subpath-test-preprovisionedpv-4t9w" satisfied condition "Succeeded or Failed"
Oct  2 13:41:04.839: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-4t9w container test-container-subpath-preprovisionedpv-4t9w: <nil>
STEP: delete the pod
Oct  2 13:41:05.242: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4t9w to disappear
Oct  2 13:41:05.433: INFO: Pod pod-subpath-test-preprovisionedpv-4t9w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4t9w
Oct  2 13:41:05.433: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4t9w" in namespace "provisioning-5417"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:10.619: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 93 lines ...
• [SLOW TEST:35.413 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Oct  2 13:40:03.608: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-9387ngddp
STEP: creating a claim
Oct  2 13:40:03.800: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-6fhk
STEP: Creating a pod to test atomic-volume-subpath
Oct  2 13:40:04.376: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-6fhk" in namespace "provisioning-9387" to be "Succeeded or Failed"
Oct  2 13:40:04.568: INFO: Pod "pod-subpath-test-dynamicpv-6fhk": Phase="Pending", Reason="", readiness=false. Elapsed: 192.132286ms
Oct  2 13:40:06.760: INFO: Pod "pod-subpath-test-dynamicpv-6fhk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384096189s
Oct  2 13:40:08.958: INFO: Pod "pod-subpath-test-dynamicpv-6fhk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.581624173s
Oct  2 13:40:11.181: INFO: Pod "pod-subpath-test-dynamicpv-6fhk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.804770119s
Oct  2 13:40:13.374: INFO: Pod "pod-subpath-test-dynamicpv-6fhk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.997894629s
Oct  2 13:40:15.569: INFO: Pod "pod-subpath-test-dynamicpv-6fhk": Phase="Pending", Reason="", readiness=false. Elapsed: 11.192413998s
... skipping 12 lines ...
Oct  2 13:40:44.078: INFO: Pod "pod-subpath-test-dynamicpv-6fhk": Phase="Running", Reason="", readiness=true. Elapsed: 39.702306927s
Oct  2 13:40:46.273: INFO: Pod "pod-subpath-test-dynamicpv-6fhk": Phase="Running", Reason="", readiness=true. Elapsed: 41.896880021s
Oct  2 13:40:48.464: INFO: Pod "pod-subpath-test-dynamicpv-6fhk": Phase="Running", Reason="", readiness=true. Elapsed: 44.088193167s
Oct  2 13:40:50.656: INFO: Pod "pod-subpath-test-dynamicpv-6fhk": Phase="Running", Reason="", readiness=true. Elapsed: 46.28013243s
Oct  2 13:40:52.848: INFO: Pod "pod-subpath-test-dynamicpv-6fhk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 48.471904881s
STEP: Saw pod success
Oct  2 13:40:52.848: INFO: Pod "pod-subpath-test-dynamicpv-6fhk" satisfied condition "Succeeded or Failed"
Oct  2 13:40:53.039: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-6fhk container test-container-subpath-dynamicpv-6fhk: <nil>
STEP: delete the pod
Oct  2 13:40:53.431: INFO: Waiting for pod pod-subpath-test-dynamicpv-6fhk to disappear
Oct  2 13:40:53.623: INFO: Pod pod-subpath-test-dynamicpv-6fhk no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-6fhk
Oct  2 13:40:53.623: INFO: Deleting pod "pod-subpath-test-dynamicpv-6fhk" in namespace "provisioning-9387"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":10,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, no capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":1,"skipped":27,"failed":0}

SSSS
------------------------------
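In the "CSIStorageCapacity used, no capacity" case, the mock driver opts into capacity tracking but publishes no usable capacity, so the scheduler keeps the pod pending. A sketch of the capacity object involved, assuming the storage.k8s.io/v1beta1 types current for this 1.21-era cluster (names are illustrative):

```go
package main

import (
	storagev1beta1 "k8s.io/api/storage/v1beta1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mockCapacity publishes how much storage a class can still provision for
// nodes matching the topology selector. With zero (or no) capacity objects,
// volume binding cannot proceed and the pod stays Pending.
func mockCapacity(ns, className string) *storagev1beta1.CSIStorageCapacity {
	qty := resource.MustParse("0")
	return &storagev1beta1.CSIStorageCapacity{
		ObjectMeta:       metav1.ObjectMeta{GenerateName: "capacity-", Namespace: ns},
		StorageClassName: className,
		NodeTopology:     &metav1.LabelSelector{},
		Capacity:         &qty,
	}
}
```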
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:6.019 seconds]
[sig-network] Ingress API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:41:01.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Oct  2 13:41:02.868: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-4993" to be "Succeeded or Failed"
Oct  2 13:41:03.058: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 189.654358ms
Oct  2 13:41:05.249: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380259457s
Oct  2 13:41:07.439: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.57069207s
Oct  2 13:41:09.630: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.76096291s
Oct  2 13:41:11.820: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.950961705s
Oct  2 13:41:11.820: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:41:12.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4993" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":3,"skipped":22,"failed":0}

SSSSSSSSSS
------------------------------
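"Image specified user ID" means the pod sets runAsNonRoot without an explicit runAsUser, so the kubelet checks the UID baked into the image and refuses to start the container if that UID is 0. A sketch of such a pod spec, assuming k8s.io/api/core/v1 types (the pod name mirrors the log; the image is left as a parameter):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootPod relies on the image's own USER directive: with RunAsNonRoot set
// and no RunAsUser override, the kubelet verifies at start time that the
// image-specified UID is non-zero.
func nonRootPod(ns, image string) *corev1.Pod {
	runAsNonRoot := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "implicit-nonroot-uid", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test",
				Image: image,
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: &runAsNonRoot,
				},
			}},
		},
	}
}
```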
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:6.717 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: too few pods, absolute => should not allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":7,"skipped":25,"failed":0}

S
------------------------------
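"Too few pods, absolute" sets a PodDisruptionBudget whose minAvailable leaves no room for a voluntary disruption, so the Eviction API must refuse the request. A sketch of such a budget, assuming the policy/v1 types (names are illustrative):

```go
package main

import (
	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// absolutePDB requires an absolute number of matching pods to stay up. When
// evicting any pod would drop the count below minAvailable, the Eviction
// API answers 429 Too Many Requests and the eviction is refused.
func absolutePDB(ns string, minAvailable int) *policyv1.PodDisruptionBudget {
	min := intstr.FromInt(minAvailable)
	return &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pdb", Namespace: ns},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &min,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "pdb-test"},
			},
		},
	}
}
```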
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:12.489: INFO: Driver local doesn't support ext3 -- skipping
... skipping 185 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy logs on node with explicit kubelet port using proxy subresource 
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":2,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:16.636: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Oct  2 13:41:10.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  2 13:41:11.790: INFO: Waiting up to 5m0s for pod "downward-api-ebd50b57-f5f9-4dd8-8397-1ddfaade441c" in namespace "downward-api-2682" to be "Succeeded or Failed"
Oct  2 13:41:11.981: INFO: Pod "downward-api-ebd50b57-f5f9-4dd8-8397-1ddfaade441c": Phase="Pending", Reason="", readiness=false. Elapsed: 191.202238ms
Oct  2 13:41:14.173: INFO: Pod "downward-api-ebd50b57-f5f9-4dd8-8397-1ddfaade441c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383270089s
Oct  2 13:41:16.365: INFO: Pod "downward-api-ebd50b57-f5f9-4dd8-8397-1ddfaade441c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.575337221s
STEP: Saw pod success
Oct  2 13:41:16.365: INFO: Pod "downward-api-ebd50b57-f5f9-4dd8-8397-1ddfaade441c" satisfied condition "Succeeded or Failed"
Oct  2 13:41:16.557: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod downward-api-ebd50b57-f5f9-4dd8-8397-1ddfaade441c container dapi-container: <nil>
STEP: delete the pod
Oct  2 13:41:16.953: INFO: Waiting for pod downward-api-ebd50b57-f5f9-4dd8-8397-1ddfaade441c to disappear
Oct  2 13:41:17.144: INFO: Pod downward-api-ebd50b57-f5f9-4dd8-8397-1ddfaade441c no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.897 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:41:17.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-problem-detector
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
STEP: Destroying namespace "node-problem-detector-3440" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [1.343 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 5 lines ...
Oct  2 13:41:12.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Oct  2 13:41:13.765: INFO: Waiting up to 5m0s for pod "security-context-1407f02f-c85f-4640-98a2-f1c3dc3c61dd" in namespace "security-context-2167" to be "Succeeded or Failed"
Oct  2 13:41:13.955: INFO: Pod "security-context-1407f02f-c85f-4640-98a2-f1c3dc3c61dd": Phase="Pending", Reason="", readiness=false. Elapsed: 190.731413ms
Oct  2 13:41:16.145: INFO: Pod "security-context-1407f02f-c85f-4640-98a2-f1c3dc3c61dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380762981s
Oct  2 13:41:18.335: INFO: Pod "security-context-1407f02f-c85f-4640-98a2-f1c3dc3c61dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.570464951s
STEP: Saw pod success
Oct  2 13:41:18.335: INFO: Pod "security-context-1407f02f-c85f-4640-98a2-f1c3dc3c61dd" satisfied condition "Succeeded or Failed"
Oct  2 13:41:18.524: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod security-context-1407f02f-c85f-4640-98a2-f1c3dc3c61dd container test-container: <nil>
STEP: delete the pod
Oct  2 13:41:18.917: INFO: Waiting for pod security-context-1407f02f-c85f-4640-98a2-f1c3dc3c61dd to disappear
Oct  2 13:41:19.105: INFO: Pod security-context-1407f02f-c85f-4640-98a2-f1c3dc3c61dd no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.912 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":4,"skipped":46,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:8.555 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":8,"skipped":26,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 52 lines ...
• [SLOW TEST:29.698 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":3,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Oct  2 13:41:08.283: INFO: PersistentVolumeClaim pvc-666rf found but phase is Pending instead of Bound.
Oct  2 13:41:10.475: INFO: PersistentVolumeClaim pvc-666rf found and phase=Bound (8.954181525s)
Oct  2 13:41:10.475: INFO: Waiting up to 3m0s for PersistentVolume local-jtw4h to have phase Bound
Oct  2 13:41:10.668: INFO: PersistentVolume local-jtw4h found and phase=Bound (193.122128ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-46hg
STEP: Creating a pod to test subpath
Oct  2 13:41:11.239: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-46hg" in namespace "provisioning-1742" to be "Succeeded or Failed"
Oct  2 13:41:11.430: INFO: Pod "pod-subpath-test-preprovisionedpv-46hg": Phase="Pending", Reason="", readiness=false. Elapsed: 190.05545ms
Oct  2 13:41:13.620: INFO: Pod "pod-subpath-test-preprovisionedpv-46hg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380939755s
Oct  2 13:41:15.812: INFO: Pod "pod-subpath-test-preprovisionedpv-46hg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572068663s
Oct  2 13:41:18.002: INFO: Pod "pod-subpath-test-preprovisionedpv-46hg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.76202219s
Oct  2 13:41:20.193: INFO: Pod "pod-subpath-test-preprovisionedpv-46hg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.953293243s
STEP: Saw pod success
Oct  2 13:41:20.193: INFO: Pod "pod-subpath-test-preprovisionedpv-46hg" satisfied condition "Succeeded or Failed"
Oct  2 13:41:20.383: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-46hg container test-container-subpath-preprovisionedpv-46hg: <nil>
STEP: delete the pod
Oct  2 13:41:20.778: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-46hg to disappear
Oct  2 13:41:20.967: INFO: Pod pod-subpath-test-preprovisionedpv-46hg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-46hg
Oct  2 13:41:20.967: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-46hg" in namespace "provisioning-1742"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:26.126: INFO: Only supported for providers [openstack] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Oct  2 13:41:20.686: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-3e2d319b-9ab3-48ad-ac7b-18f29354d6a6" in namespace "security-context-test-9873" to be "Succeeded or Failed"
Oct  2 13:41:20.875: INFO: Pod "alpine-nnp-true-3e2d319b-9ab3-48ad-ac7b-18f29354d6a6": Phase="Pending", Reason="", readiness=false. Elapsed: 188.838447ms
Oct  2 13:41:23.064: INFO: Pod "alpine-nnp-true-3e2d319b-9ab3-48ad-ac7b-18f29354d6a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377843717s
Oct  2 13:41:25.256: INFO: Pod "alpine-nnp-true-3e2d319b-9ab3-48ad-ac7b-18f29354d6a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570454547s
Oct  2 13:41:27.447: INFO: Pod "alpine-nnp-true-3e2d319b-9ab3-48ad-ac7b-18f29354d6a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.760861186s
Oct  2 13:41:27.447: INFO: Pod "alpine-nnp-true-3e2d319b-9ab3-48ad-ac7b-18f29354d6a6" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:41:27.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9873" for this suite.


... skipping 27 lines ...
Oct  2 13:41:07.091: INFO: PersistentVolumeClaim pvc-wc44c found but phase is Pending instead of Bound.
Oct  2 13:41:09.281: INFO: PersistentVolumeClaim pvc-wc44c found and phase=Bound (4.578522508s)
Oct  2 13:41:09.281: INFO: Waiting up to 3m0s for PersistentVolume local-psctc to have phase Bound
Oct  2 13:41:09.470: INFO: PersistentVolume local-psctc found and phase=Bound (188.796218ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-w27q
STEP: Creating a pod to test subpath
Oct  2 13:41:10.043: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-w27q" in namespace "provisioning-5609" to be "Succeeded or Failed"
Oct  2 13:41:10.232: INFO: Pod "pod-subpath-test-preprovisionedpv-w27q": Phase="Pending", Reason="", readiness=false. Elapsed: 189.064196ms
Oct  2 13:41:12.423: INFO: Pod "pod-subpath-test-preprovisionedpv-w27q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380204394s
Oct  2 13:41:14.613: INFO: Pod "pod-subpath-test-preprovisionedpv-w27q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570298951s
Oct  2 13:41:16.803: INFO: Pod "pod-subpath-test-preprovisionedpv-w27q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.760357536s
STEP: Saw pod success
Oct  2 13:41:16.803: INFO: Pod "pod-subpath-test-preprovisionedpv-w27q" satisfied condition "Succeeded or Failed"
Oct  2 13:41:16.993: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-w27q container test-container-subpath-preprovisionedpv-w27q: <nil>
STEP: delete the pod
Oct  2 13:41:17.384: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-w27q to disappear
Oct  2 13:41:17.575: INFO: Pod pod-subpath-test-preprovisionedpv-w27q no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-w27q
Oct  2 13:41:17.575: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-w27q" in namespace "provisioning-5609"
STEP: Creating pod pod-subpath-test-preprovisionedpv-w27q
STEP: Creating a pod to test subpath
Oct  2 13:41:17.960: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-w27q" in namespace "provisioning-5609" to be "Succeeded or Failed"
Oct  2 13:41:18.149: INFO: Pod "pod-subpath-test-preprovisionedpv-w27q": Phase="Pending", Reason="", readiness=false. Elapsed: 188.925814ms
Oct  2 13:41:20.353: INFO: Pod "pod-subpath-test-preprovisionedpv-w27q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392780889s
Oct  2 13:41:22.545: INFO: Pod "pod-subpath-test-preprovisionedpv-w27q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.584605315s
Oct  2 13:41:24.735: INFO: Pod "pod-subpath-test-preprovisionedpv-w27q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.774463519s
STEP: Saw pod success
Oct  2 13:41:24.735: INFO: Pod "pod-subpath-test-preprovisionedpv-w27q" satisfied condition "Succeeded or Failed"
Oct  2 13:41:24.924: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-w27q container test-container-subpath-preprovisionedpv-w27q: <nil>
STEP: delete the pod
Oct  2 13:41:25.316: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-w27q to disappear
Oct  2 13:41:25.505: INFO: Pod pod-subpath-test-preprovisionedpv-w27q no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-w27q
Oct  2 13:41:25.506: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-w27q" in namespace "provisioning-5609"
... skipping 127 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":31,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] HostPort
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:22.815 seconds]
[sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:29.927: INFO: Only supported for providers [gce gke] (not aws)
... skipping 119 lines ...
• [SLOW TEST:19.010 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replace and Patch tests [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:30.529: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 82 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286

      Disabled temporarily, reopen after #73168 is fixed

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:41:28.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-9efca7b5-ed58-4b9b-ab0f-cfd333784dde
STEP: Creating a pod to test consume configMaps
Oct  2 13:41:29.447: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2841d279-d119-4350-a28e-0180a8606dda" in namespace "projected-9261" to be "Succeeded or Failed"
Oct  2 13:41:29.637: INFO: Pod "pod-projected-configmaps-2841d279-d119-4350-a28e-0180a8606dda": Phase="Pending", Reason="", readiness=false. Elapsed: 189.247501ms
Oct  2 13:41:31.826: INFO: Pod "pod-projected-configmaps-2841d279-d119-4350-a28e-0180a8606dda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.379120487s
STEP: Saw pod success
Oct  2 13:41:31.826: INFO: Pod "pod-projected-configmaps-2841d279-d119-4350-a28e-0180a8606dda" satisfied condition "Succeeded or Failed"
Oct  2 13:41:32.016: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-projected-configmaps-2841d279-d119-4350-a28e-0180a8606dda container agnhost-container: <nil>
STEP: delete the pod
Oct  2 13:41:32.405: INFO: Waiting for pod pod-projected-configmaps-2841d279-d119-4350-a28e-0180a8606dda to disappear
Oct  2 13:41:32.594: INFO: Pod pod-projected-configmaps-2841d279-d119-4350-a28e-0180a8606dda no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:41:32.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9261" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:32.987: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 83 lines ...
Oct  2 13:41:22.674: INFO: PersistentVolumeClaim pvc-hqg9b found but phase is Pending instead of Bound.
Oct  2 13:41:24.862: INFO: PersistentVolumeClaim pvc-hqg9b found and phase=Bound (11.152218014s)
Oct  2 13:41:24.862: INFO: Waiting up to 3m0s for PersistentVolume local-4nzqc to have phase Bound
Oct  2 13:41:25.050: INFO: PersistentVolume local-4nzqc found and phase=Bound (187.739126ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hzv2
STEP: Creating a pod to test subpath
Oct  2 13:41:25.617: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hzv2" in namespace "provisioning-1392" to be "Succeeded or Failed"
Oct  2 13:41:25.805: INFO: Pod "pod-subpath-test-preprovisionedpv-hzv2": Phase="Pending", Reason="", readiness=false. Elapsed: 187.8568ms
Oct  2 13:41:27.995: INFO: Pod "pod-subpath-test-preprovisionedpv-hzv2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377067197s
Oct  2 13:41:30.184: INFO: Pod "pod-subpath-test-preprovisionedpv-hzv2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.566404951s
STEP: Saw pod success
Oct  2 13:41:30.184: INFO: Pod "pod-subpath-test-preprovisionedpv-hzv2" satisfied condition "Succeeded or Failed"
Oct  2 13:41:30.373: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-hzv2 container test-container-volume-preprovisionedpv-hzv2: <nil>
STEP: delete the pod
Oct  2 13:41:30.764: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hzv2 to disappear
Oct  2 13:41:30.952: INFO: Pod pod-subpath-test-preprovisionedpv-hzv2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hzv2
Oct  2 13:41:30.952: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hzv2" in namespace "provisioning-1392"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":38,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
• [SLOW TEST:36.885 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":3,"skipped":9,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":50,"failed":0}
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:41:28.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.558 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":6,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:34.605: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 61 lines ...
STEP: Destroying namespace "services-8331" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 21 lines ...
Oct  2 13:41:23.342: INFO: PersistentVolumeClaim pvc-9fc9c found but phase is Pending instead of Bound.
Oct  2 13:41:25.536: INFO: PersistentVolumeClaim pvc-9fc9c found and phase=Bound (4.581253706s)
Oct  2 13:41:25.536: INFO: Waiting up to 3m0s for PersistentVolume local-s5r4w to have phase Bound
Oct  2 13:41:25.731: INFO: PersistentVolume local-s5r4w found and phase=Bound (194.701717ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-wxj8
STEP: Creating a pod to test exec-volume-test
Oct  2 13:41:26.314: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-wxj8" in namespace "volume-8037" to be "Succeeded or Failed"
Oct  2 13:41:26.510: INFO: Pod "exec-volume-test-preprovisionedpv-wxj8": Phase="Pending", Reason="", readiness=false. Elapsed: 195.955273ms
Oct  2 13:41:28.704: INFO: Pod "exec-volume-test-preprovisionedpv-wxj8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389394967s
Oct  2 13:41:30.898: INFO: Pod "exec-volume-test-preprovisionedpv-wxj8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.584046353s
STEP: Saw pod success
Oct  2 13:41:30.898: INFO: Pod "exec-volume-test-preprovisionedpv-wxj8" satisfied condition "Succeeded or Failed"
Oct  2 13:41:31.091: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod exec-volume-test-preprovisionedpv-wxj8 container exec-container-preprovisionedpv-wxj8: <nil>
STEP: delete the pod
Oct  2 13:41:31.497: INFO: Waiting for pod exec-volume-test-preprovisionedpv-wxj8 to disappear
Oct  2 13:41:31.691: INFO: Pod exec-volume-test-preprovisionedpv-wxj8 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-wxj8
Oct  2 13:41:31.691: INFO: Deleting pod "exec-volume-test-preprovisionedpv-wxj8" in namespace "volume-8037"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:41:38.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4537" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:38.845: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 21 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-53486ccd-7991-4ccc-8287-1e20e891daf9
STEP: Creating a pod to test consume secrets
Oct  2 13:41:31.283: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7f74f89f-a9f5-4560-911c-438ea532eb29" in namespace "projected-5381" to be "Succeeded or Failed"
Oct  2 13:41:31.480: INFO: Pod "pod-projected-secrets-7f74f89f-a9f5-4560-911c-438ea532eb29": Phase="Pending", Reason="", readiness=false. Elapsed: 197.425357ms
Oct  2 13:41:33.671: INFO: Pod "pod-projected-secrets-7f74f89f-a9f5-4560-911c-438ea532eb29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.388488531s
Oct  2 13:41:35.898: INFO: Pod "pod-projected-secrets-7f74f89f-a9f5-4560-911c-438ea532eb29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.615169693s
Oct  2 13:41:38.089: INFO: Pod "pod-projected-secrets-7f74f89f-a9f5-4560-911c-438ea532eb29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.806414209s
STEP: Saw pod success
Oct  2 13:41:38.089: INFO: Pod "pod-projected-secrets-7f74f89f-a9f5-4560-911c-438ea532eb29" satisfied condition "Succeeded or Failed"
Oct  2 13:41:38.280: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-projected-secrets-7f74f89f-a9f5-4560-911c-438ea532eb29 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct  2 13:41:38.668: INFO: Waiting for pod pod-projected-secrets-7f74f89f-a9f5-4560-911c-438ea532eb29 to disappear
Oct  2 13:41:38.860: INFO: Pod pod-projected-secrets-7f74f89f-a9f5-4560-911c-438ea532eb29 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.306 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:793
    should reuse port when apply to an existing SVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:807
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":5,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:39.654: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:40:03.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 115 lines ...
Oct  2 13:41:02.915: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-4746rssws
STEP: creating a claim
Oct  2 13:41:03.107: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-hp4r
STEP: Creating a pod to test exec-volume-test
Oct  2 13:41:03.695: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-hp4r" in namespace "volume-4746" to be "Succeeded or Failed"
Oct  2 13:41:03.886: INFO: Pod "exec-volume-test-dynamicpv-hp4r": Phase="Pending", Reason="", readiness=false. Elapsed: 191.218959ms
Oct  2 13:41:06.078: INFO: Pod "exec-volume-test-dynamicpv-hp4r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383153489s
Oct  2 13:41:08.270: INFO: Pod "exec-volume-test-dynamicpv-hp4r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.574498975s
Oct  2 13:41:10.466: INFO: Pod "exec-volume-test-dynamicpv-hp4r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.770384764s
Oct  2 13:41:12.658: INFO: Pod "exec-volume-test-dynamicpv-hp4r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.962729807s
Oct  2 13:41:14.854: INFO: Pod "exec-volume-test-dynamicpv-hp4r": Phase="Pending", Reason="", readiness=false. Elapsed: 11.159108098s
Oct  2 13:41:17.046: INFO: Pod "exec-volume-test-dynamicpv-hp4r": Phase="Pending", Reason="", readiness=false. Elapsed: 13.350967884s
Oct  2 13:41:19.246: INFO: Pod "exec-volume-test-dynamicpv-hp4r": Phase="Pending", Reason="", readiness=false. Elapsed: 15.550412396s
Oct  2 13:41:21.438: INFO: Pod "exec-volume-test-dynamicpv-hp4r": Phase="Pending", Reason="", readiness=false. Elapsed: 17.743083159s
Oct  2 13:41:23.630: INFO: Pod "exec-volume-test-dynamicpv-hp4r": Phase="Pending", Reason="", readiness=false. Elapsed: 19.935064646s
Oct  2 13:41:25.823: INFO: Pod "exec-volume-test-dynamicpv-hp4r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.127331423s
STEP: Saw pod success
Oct  2 13:41:25.823: INFO: Pod "exec-volume-test-dynamicpv-hp4r" satisfied condition "Succeeded or Failed"
Oct  2 13:41:26.014: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod exec-volume-test-dynamicpv-hp4r container exec-container-dynamicpv-hp4r: <nil>
STEP: delete the pod
Oct  2 13:41:26.410: INFO: Waiting for pod exec-volume-test-dynamicpv-hp4r to disappear
Oct  2 13:41:26.602: INFO: Pod exec-volume-test-dynamicpv-hp4r no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-hp4r
Oct  2 13:41:26.602: INFO: Deleting pod "exec-volume-test-dynamicpv-hp4r" in namespace "volume-4746"
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":27,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:43.769: INFO: Only supported for providers [openstack] (not aws)
... skipping 140 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:44.293: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 76 lines ...
Oct  2 13:40:53.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Oct  2 13:40:54.572: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  2 13:40:54.958: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-2706" in namespace "volume-2706" to be "Succeeded or Failed"
Oct  2 13:40:55.148: INFO: Pod "hostpath-symlink-prep-volume-2706": Phase="Pending", Reason="", readiness=false. Elapsed: 189.878812ms
Oct  2 13:40:57.338: INFO: Pod "hostpath-symlink-prep-volume-2706": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.380213826s
STEP: Saw pod success
Oct  2 13:40:57.338: INFO: Pod "hostpath-symlink-prep-volume-2706" satisfied condition "Succeeded or Failed"
Oct  2 13:40:57.338: INFO: Deleting pod "hostpath-symlink-prep-volume-2706" in namespace "volume-2706"
Oct  2 13:40:57.533: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-2706" to be fully deleted
Oct  2 13:40:57.723: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Oct  2 13:41:06.300: INFO: Running '/tmp/kubectl3829199734/kubectl --server=https://api.e2e-de872154ff-19973.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-2706 exec hostpathsymlink-injector --namespace=volume-2706 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-2706' > /opt/0/index.html'
... skipping 36 lines ...
Oct  2 13:41:39.002: INFO: Pod hostpathsymlink-client still exists
Oct  2 13:41:40.812: INFO: Waiting for pod hostpathsymlink-client to disappear
Oct  2 13:41:41.002: INFO: Pod hostpathsymlink-client still exists
Oct  2 13:41:42.811: INFO: Waiting for pod hostpathsymlink-client to disappear
Oct  2 13:41:43.002: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Oct  2 13:41:43.196: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-2706" in namespace "volume-2706" to be "Succeeded or Failed"
Oct  2 13:41:43.386: INFO: Pod "hostpath-symlink-prep-volume-2706": Phase="Pending", Reason="", readiness=false. Elapsed: 190.486793ms
Oct  2 13:41:45.581: INFO: Pod "hostpath-symlink-prep-volume-2706": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.385659754s
STEP: Saw pod success
Oct  2 13:41:45.581: INFO: Pod "hostpath-symlink-prep-volume-2706" satisfied condition "Succeeded or Failed"
Oct  2 13:41:45.581: INFO: Deleting pod "hostpath-symlink-prep-volume-2706" in namespace "volume-2706"
Oct  2 13:41:45.780: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-2706" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:41:45.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2706" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":6,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":2,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:47.051: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 92 lines ...
• [SLOW TEST:20.207 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":6,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:50.805: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 217 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":1,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:52.830: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 116 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:41:55.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6146" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 41 lines ...
Oct  2 13:40:37.690: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7235
Oct  2 13:40:37.888: INFO: creating *v1.StatefulSet: csi-mock-volumes-7235-4100/csi-mockplugin-attacher
Oct  2 13:40:38.078: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7235"
Oct  2 13:40:38.266: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7235 to register on node ip-172-20-42-183.ap-southeast-2.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Oct  2 13:41:00.444: INFO: Error getting logs for pod inline-volume-52sn7: the server rejected our request for an unknown reason (get pods inline-volume-52sn7)
Oct  2 13:41:00.641: INFO: Deleting pod "inline-volume-52sn7" in namespace "csi-mock-volumes-7235"
Oct  2 13:41:00.832: INFO: Wait up to 5m0s for pod "inline-volume-52sn7" to be fully deleted
STEP: Deleting the previously created pod
Oct  2 13:41:05.210: INFO: Deleting pod "pvc-volume-tester-wnkw6" in namespace "csi-mock-volumes-7235"
Oct  2 13:41:05.402: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wnkw6" to be fully deleted
STEP: Checking CSI driver logs
Oct  2 13:41:15.977: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-wnkw6
Oct  2 13:41:15.977: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-7235
Oct  2 13:41:15.977: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 5a769406-e507-4598-bcb2-7ea4a35adb54
Oct  2 13:41:15.977: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Oct  2 13:41:15.977: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Oct  2 13:41:15.977: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-bd8598b7aa8228c50ceaacc8c10c9ceb59c51650210adadcefe0e71e36d3397e","target_path":"/var/lib/kubelet/pods/5a769406-e507-4598-bcb2-7ea4a35adb54/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-wnkw6
Oct  2 13:41:15.978: INFO: Deleting pod "pvc-volume-tester-wnkw6" in namespace "csi-mock-volumes-7235"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-7235
STEP: Waiting for namespaces [csi-mock-volumes-7235] to vanish
STEP: uninstalling csi mock driver
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    contain ephemeral=true when using inline volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:57.224: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 87 lines ...
Oct  2 13:41:50.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct  2 13:41:52.035: INFO: Waiting up to 5m0s for pod "security-context-ebc44162-50c3-47ec-9b34-10804e35e80e" in namespace "security-context-211" to be "Succeeded or Failed"
Oct  2 13:41:52.228: INFO: Pod "security-context-ebc44162-50c3-47ec-9b34-10804e35e80e": Phase="Pending", Reason="", readiness=false. Elapsed: 192.489885ms
Oct  2 13:41:54.425: INFO: Pod "security-context-ebc44162-50c3-47ec-9b34-10804e35e80e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389412674s
Oct  2 13:41:56.618: INFO: Pod "security-context-ebc44162-50c3-47ec-9b34-10804e35e80e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.583022561s
STEP: Saw pod success
Oct  2 13:41:56.618: INFO: Pod "security-context-ebc44162-50c3-47ec-9b34-10804e35e80e" satisfied condition "Succeeded or Failed"
Oct  2 13:41:56.811: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod security-context-ebc44162-50c3-47ec-9b34-10804e35e80e container test-container: <nil>
STEP: delete the pod
Oct  2 13:41:57.209: INFO: Waiting for pod security-context-ebc44162-50c3-47ec-9b34-10804e35e80e to disappear
Oct  2 13:41:57.406: INFO: Pod security-context-ebc44162-50c3-47ec-9b34-10804e35e80e no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.953 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}

S
------------------------------
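The Security Context test above verifies exactly the two pod-level fields in its name: every container process runs with the given UID and GID. A sketch of such a pod; the busybox image and the `id` command are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func runAsPod(uid, gid int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:  &uid, // pod.Spec.SecurityContext.RunAsUser
				RunAsGroup: &gid, // pod.Spec.SecurityContext.RunAsGroup
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.33", // illustrative image
				Command: []string{"sh", "-c", "id"}, // prints uid=<uid> gid=<gid>
			}},
		},
	}
}

------------------------------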
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:57.836: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 79 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903
    should check if kubectl can dry-run update Pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":6,"skipped":44,"failed":0}

S
------------------------------
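Server-side dry-run, which the kubectl test above exercises, has a direct client-go analogue: passing DryRun=All in the update options makes the API server validate and admit the change without persisting it. A minimal sketch, assuming the client and a previously fetched, modified pod already exist:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dryRunUpdate submits pod with DryRun=All: the request runs through
// validation and admission, but nothing is written to etcd.
func dryRunUpdate(client kubernetes.Interface, pod *corev1.Pod) error {
	_, err := client.CoreV1().Pods(pod.Namespace).Update(
		context.TODO(), pod,
		metav1.UpdateOptions{DryRun: []string{metav1.DryRunAll}},
	)
	return err
}

------------------------------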
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:41:59.007: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 13:41:57.197: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85c22d0d-86c0-4ef2-9a69-193a8c90bed6" in namespace "projected-5816" to be "Succeeded or Failed"
Oct  2 13:41:57.399: INFO: Pod "downwardapi-volume-85c22d0d-86c0-4ef2-9a69-193a8c90bed6": Phase="Pending", Reason="", readiness=false. Elapsed: 201.33177ms
Oct  2 13:41:59.592: INFO: Pod "downwardapi-volume-85c22d0d-86c0-4ef2-9a69-193a8c90bed6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.394666119s
STEP: Saw pod success
Oct  2 13:41:59.592: INFO: Pod "downwardapi-volume-85c22d0d-86c0-4ef2-9a69-193a8c90bed6" satisfied condition "Succeeded or Failed"
Oct  2 13:41:59.794: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod downwardapi-volume-85c22d0d-86c0-4ef2-9a69-193a8c90bed6 container client-container: <nil>
STEP: delete the pod
Oct  2 13:42:00.195: INFO: Waiting for pod downwardapi-volume-85c22d0d-86c0-4ef2-9a69-193a8c90bed6 to disappear
Oct  2 13:42:00.386: INFO: Pod downwardapi-volume-85c22d0d-86c0-4ef2-9a69-193a8c90bed6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:42:00.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5816" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0}

SSSS
------------------------------
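The projected downward-API test above surfaces a container's own CPU request as a file inside the pod. A sketch of the volume wiring under test, with illustrative names, request value, and image:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func cpuRequestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.33",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("250m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_request",
									// Exposes this container's requests.cpu as file content.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

------------------------------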
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Oct  2 13:41:53.394: INFO: PersistentVolumeClaim pvc-6252d found but phase is Pending instead of Bound.
Oct  2 13:41:55.586: INFO: PersistentVolumeClaim pvc-6252d found and phase=Bound (4.575005521s)
Oct  2 13:41:55.586: INFO: Waiting up to 3m0s for PersistentVolume local-57rp2 to have phase Bound
Oct  2 13:41:55.780: INFO: PersistentVolume local-57rp2 found and phase=Bound (193.626053ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zsx5
STEP: Creating a pod to test subpath
Oct  2 13:41:56.349: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zsx5" in namespace "provisioning-6891" to be "Succeeded or Failed"
Oct  2 13:41:56.538: INFO: Pod "pod-subpath-test-preprovisionedpv-zsx5": Phase="Pending", Reason="", readiness=false. Elapsed: 189.155106ms
Oct  2 13:41:58.729: INFO: Pod "pod-subpath-test-preprovisionedpv-zsx5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379889167s
Oct  2 13:42:00.920: INFO: Pod "pod-subpath-test-preprovisionedpv-zsx5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.57053265s
STEP: Saw pod success
Oct  2 13:42:00.920: INFO: Pod "pod-subpath-test-preprovisionedpv-zsx5" satisfied condition "Succeeded or Failed"
Oct  2 13:42:01.111: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-zsx5 container test-container-subpath-preprovisionedpv-zsx5: <nil>
STEP: delete the pod
Oct  2 13:42:01.499: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zsx5 to disappear
Oct  2 13:42:01.690: INFO: Pod pod-subpath-test-preprovisionedpv-zsx5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zsx5
Oct  2 13:42:01.690: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zsx5" in namespace "provisioning-6891"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:04.280: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:04.641: INFO: Only supported for providers [gce gke] (not aws)
... skipping 103 lines ...
• [SLOW TEST:19.288 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0}

SSS
------------------------------
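The Services test above flips a Service from ExternalName to NodePort, after which the API server allocates a cluster IP and a node port for each service port. A hedged sketch of that flip with client-go (the e2e suite's actual mechanics differ); note the external name must be cleared for non-ExternalName types, and the service is assumed to already define ports:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func toNodePort(client kubernetes.Interface, ns, name string) (*corev1.Service, error) {
	svc, err := client.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = "" // must be empty once the type is no longer ExternalName
	return client.CoreV1().Services(ns).Update(context.TODO(), svc, metav1.UpdateOptions{})
}

------------------------------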
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:05.702: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 68 lines ...
Oct  2 13:41:51.670: INFO: PersistentVolumeClaim pvc-sjs9j found but phase is Pending instead of Bound.
Oct  2 13:41:53.860: INFO: PersistentVolumeClaim pvc-sjs9j found and phase=Bound (6.763514677s)
Oct  2 13:41:53.860: INFO: Waiting up to 3m0s for PersistentVolume local-dwkrg to have phase Bound
Oct  2 13:41:54.049: INFO: PersistentVolume local-dwkrg found and phase=Bound (189.303831ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qvbx
STEP: Creating a pod to test subpath
Oct  2 13:41:54.620: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qvbx" in namespace "provisioning-7292" to be "Succeeded or Failed"
Oct  2 13:41:54.809: INFO: Pod "pod-subpath-test-preprovisionedpv-qvbx": Phase="Pending", Reason="", readiness=false. Elapsed: 189.193191ms
Oct  2 13:41:56.999: INFO: Pod "pod-subpath-test-preprovisionedpv-qvbx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378850875s
Oct  2 13:41:59.189: INFO: Pod "pod-subpath-test-preprovisionedpv-qvbx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.56876221s
Oct  2 13:42:01.378: INFO: Pod "pod-subpath-test-preprovisionedpv-qvbx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.758220625s
Oct  2 13:42:03.568: INFO: Pod "pod-subpath-test-preprovisionedpv-qvbx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.94854286s
STEP: Saw pod success
Oct  2 13:42:03.569: INFO: Pod "pod-subpath-test-preprovisionedpv-qvbx" satisfied condition "Succeeded or Failed"
Oct  2 13:42:03.759: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-qvbx container test-container-volume-preprovisionedpv-qvbx: <nil>
STEP: delete the pod
Oct  2 13:42:04.159: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qvbx to disappear
Oct  2 13:42:04.349: INFO: Pod pod-subpath-test-preprovisionedpv-qvbx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qvbx
Oct  2 13:42:04.349: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qvbx" in namespace "provisioning-7292"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":42,"failed":0}

SSSSS
------------------------------
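The terminated-container test above checks TerminationMessagePolicy semantics: with FallbackToLogsOnError, the kubelet only falls back to container logs when the container fails, so a clean exit that leaves the message file empty reports an empty termination message. A sketch of the container spec under test; image and command are illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

func terminationContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox:1.33",
		Command:                  []string{"sh", "-c", "exit 0"}, // succeeds, writes no message
		TerminationMessagePath:   "/dev/termination-log",         // the default path
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}

------------------------------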
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:15.240: INFO: Only supported for providers [gce gke] (not aws)
... skipping 89 lines ...
• [SLOW TEST:19.436 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":8,"skipped":42,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:17.327: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 71 lines ...
• [SLOW TEST:10.182 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":9,"skipped":58,"failed":0}

S
------------------------------
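The "configurable pod resolv.conf" test above relies on DNSPolicy "None", which makes the pod's /etc/resolv.conf come entirely from the pod's DNSConfig. A sketch of such a spec, with placeholder nameserver, search domain, and image:

package sketch

import corev1 "k8s.io/api/core/v1"

func resolvConfSpec() corev1.PodSpec {
	ndots := "2"
	return corev1.PodSpec{
		DNSPolicy: corev1.DNSNone, // ignore cluster DNS; use DNSConfig verbatim
		DNSConfig: &corev1.PodDNSConfig{
			Nameservers: []string{"1.1.1.1"},       // placeholder
			Searches:    []string{"example.com"},   // placeholder
			Options: []corev1.PodDNSConfigOption{
				{Name: "ndots", Value: &ndots},
			},
		},
		Containers: []corev1.Container{{
			Name:    "dns-utils",
			Image:   "busybox:1.33",
			Command: []string{"sh", "-c", "cat /etc/resolv.conf"},
		}},
	}
}

------------------------------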
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:21.110: INFO: Only supported for providers [openstack] (not aws)
... skipping 35 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":52,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:41:58.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 106 lines ...
• [SLOW TEST:22.837 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":5,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
... skipping 137 lines ...
• [SLOW TEST:76.472 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:27.502: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 176 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:238
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":2,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:28.606: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 86 lines ...
• [SLOW TEST:5.238 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":3,"skipped":37,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Oct  2 13:42:05.685: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  2 13:42:05.685: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-zd6m
STEP: Creating a pod to test atomic-volume-subpath
Oct  2 13:42:05.884: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-zd6m" in namespace "provisioning-5062" to be "Succeeded or Failed"
Oct  2 13:42:06.074: INFO: Pod "pod-subpath-test-inlinevolume-zd6m": Phase="Pending", Reason="", readiness=false. Elapsed: 190.577273ms
Oct  2 13:42:08.267: INFO: Pod "pod-subpath-test-inlinevolume-zd6m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382692526s
Oct  2 13:42:10.459: INFO: Pod "pod-subpath-test-inlinevolume-zd6m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575052103s
Oct  2 13:42:12.655: INFO: Pod "pod-subpath-test-inlinevolume-zd6m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.77072127s
Oct  2 13:42:14.856: INFO: Pod "pod-subpath-test-inlinevolume-zd6m": Phase="Running", Reason="", readiness=true. Elapsed: 8.972232256s
Oct  2 13:42:17.048: INFO: Pod "pod-subpath-test-inlinevolume-zd6m": Phase="Running", Reason="", readiness=true. Elapsed: 11.164383899s
... skipping 3 lines ...
Oct  2 13:42:25.861: INFO: Pod "pod-subpath-test-inlinevolume-zd6m": Phase="Running", Reason="", readiness=true. Elapsed: 19.976608619s
Oct  2 13:42:28.053: INFO: Pod "pod-subpath-test-inlinevolume-zd6m": Phase="Running", Reason="", readiness=true. Elapsed: 22.169315258s
Oct  2 13:42:30.245: INFO: Pod "pod-subpath-test-inlinevolume-zd6m": Phase="Running", Reason="", readiness=true. Elapsed: 24.360598275s
Oct  2 13:42:32.436: INFO: Pod "pod-subpath-test-inlinevolume-zd6m": Phase="Running", Reason="", readiness=true. Elapsed: 26.552172952s
Oct  2 13:42:34.628: INFO: Pod "pod-subpath-test-inlinevolume-zd6m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.743680089s
STEP: Saw pod success
Oct  2 13:42:34.628: INFO: Pod "pod-subpath-test-inlinevolume-zd6m" satisfied condition "Succeeded or Failed"
Oct  2 13:42:34.819: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-subpath-test-inlinevolume-zd6m container test-container-subpath-inlinevolume-zd6m: <nil>
STEP: delete the pod
Oct  2 13:42:35.214: INFO: Waiting for pod pod-subpath-test-inlinevolume-zd6m to disappear
Oct  2 13:42:35.405: INFO: Pod pod-subpath-test-inlinevolume-zd6m no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-zd6m
Oct  2 13:42:35.405: INFO: Deleting pod "pod-subpath-test-inlinevolume-zd6m" in namespace "provisioning-5062"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:36.181: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:36.688: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 65 lines ...
• [SLOW TEST:9.669 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":4,"skipped":45,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
Oct  2 13:41:06.240: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3846
Oct  2 13:41:06.430: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3846
Oct  2 13:41:06.620: INFO: creating *v1.StatefulSet: csi-mock-volumes-3846-7255/csi-mockplugin
Oct  2 13:41:06.824: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3846
Oct  2 13:41:07.017: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3846"
Oct  2 13:41:07.206: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3846 to register on node ip-172-20-33-188.ap-southeast-2.compute.internal
I1002 13:41:13.495160   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1002 13:41:13.693055   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3846","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1002 13:41:13.907016   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I1002 13:41:14.099037   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1002 13:41:14.545750   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3846","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1002 13:41:15.542700   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-3846"},"Error":"","FullError":null}
STEP: Creating pod
Oct  2 13:41:17.853: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I1002 13:41:18.258173   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I1002 13:41:18.453539   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866"}}},"Error":"","FullError":null}
I1002 13:41:19.508622   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct  2 13:41:19.719: INFO: >>> kubeConfig: /root/.kube/config
I1002 13:41:21.065090   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866","storage.kubernetes.io/csiProvisionerIdentity":"1633182074191-8081-csi-mock-csi-mock-volumes-3846"}},"Response":{},"Error":"","FullError":null}
I1002 13:41:21.624871   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct  2 13:41:21.816: INFO: >>> kubeConfig: /root/.kube/config
Oct  2 13:41:23.070: INFO: >>> kubeConfig: /root/.kube/config
Oct  2 13:41:24.378: INFO: >>> kubeConfig: /root/.kube/config
I1002 13:41:25.675811   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866/globalmount","target_path":"/var/lib/kubelet/pods/2ed9a5e3-aff1-480b-a290-9470ffcc9981/volumes/kubernetes.io~csi/pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866","storage.kubernetes.io/csiProvisionerIdentity":"1633182074191-8081-csi-mock-csi-mock-volumes-3846"}},"Response":{},"Error":"","FullError":null}
I1002 13:41:28.616133   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1002 13:41:28.805555   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/2ed9a5e3-aff1-480b-a290-9470ffcc9981/volumes/kubernetes.io~csi/pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null}
Oct  2 13:41:30.616: INFO: Deleting pod "pvc-volume-tester-9htfk" in namespace "csi-mock-volumes-3846"
Oct  2 13:41:30.807: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9htfk" to be fully deleted
Oct  2 13:41:33.240: INFO: >>> kubeConfig: /root/.kube/config
I1002 13:41:34.526745   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2ed9a5e3-aff1-480b-a290-9470ffcc9981/volumes/kubernetes.io~csi/pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866/mount"},"Response":{},"Error":"","FullError":null}
I1002 13:41:34.762291   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1002 13:41:34.977068   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866/globalmount"},"Response":{},"Error":"","FullError":null}
I1002 13:41:39.400362   14174 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Oct  2 13:41:40.378: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-j25rv", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3846", SelfLink:"", UID:"7f56965a-acb8-4e4e-af1d-7811cd3c1866", ResourceVersion:"4522", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768778877, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000aaa078), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000aaa090)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0035b2040), VolumeMode:(*v1.PersistentVolumeMode)(0xc0035b2050), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  2 13:41:40.379: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-j25rv", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3846", SelfLink:"", UID:"7f56965a-acb8-4e4e-af1d-7811cd3c1866", ResourceVersion:"4529", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768778877, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-33-188.ap-southeast-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000aaa2b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000aaa2d0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000aaa2e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000aaa300)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0035b21f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0035b2200), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  2 13:41:40.379: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-j25rv", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3846", SelfLink:"", UID:"7f56965a-acb8-4e4e-af1d-7811cd3c1866", ResourceVersion:"4530", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768778877, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3846", "volume.kubernetes.io/selected-node":"ip-172-20-33-188.ap-southeast-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002839d40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002839d58)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002839d70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002839d88)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002839da0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002839db8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0008addc0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0008ade40), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  2 13:41:40.379: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-j25rv", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3846", SelfLink:"", UID:"7f56965a-acb8-4e4e-af1d-7811cd3c1866", ResourceVersion:"4536", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768778877, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3846", "volume.kubernetes.io/selected-node":"ip-172-20-33-188.ap-southeast-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003de6e40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003de6e58)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003de6e70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003de6e88)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003de6ea0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003de6eb8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866", StorageClassName:(*string)(0xc003709550), VolumeMode:(*v1.PersistentVolumeMode)(0xc003709560), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  2 13:41:40.379: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-j25rv", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3846", SelfLink:"", UID:"7f56965a-acb8-4e4e-af1d-7811cd3c1866", ResourceVersion:"4537", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768778877, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3846", "volume.kubernetes.io/selected-node":"ip-172-20-33-188.ap-southeast-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003de6ee8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003de6f00)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003de6f18), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003de6f30)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003de6f48), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003de6f60)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-7f56965a-acb8-4e4e-af1d-7811cd3c1866", StorageClassName:(*string)(0xc003709590), VolumeMode:(*v1.PersistentVolumeMode)(0xc0037095a0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, late binding, no topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:46.273: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 72 lines ...
Oct  2 13:41:47.237: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Oct  2 13:41:47.435: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpath4bwxg] to have phase Bound
Oct  2 13:41:47.630: INFO: PersistentVolumeClaim csi-hostpath4bwxg found but phase is Pending instead of Bound.
Oct  2 13:41:49.824: INFO: PersistentVolumeClaim csi-hostpath4bwxg found and phase=Bound (2.38928597s)
STEP: Creating pod pod-subpath-test-dynamicpv-jfv8
STEP: Creating a pod to test subpath
Oct  2 13:41:50.407: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-jfv8" in namespace "provisioning-3835" to be "Succeeded or Failed"
Oct  2 13:41:50.600: INFO: Pod "pod-subpath-test-dynamicpv-jfv8": Phase="Pending", Reason="", readiness=false. Elapsed: 193.15446ms
Oct  2 13:41:52.796: INFO: Pod "pod-subpath-test-dynamicpv-jfv8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.388598097s
Oct  2 13:41:54.997: INFO: Pod "pod-subpath-test-dynamicpv-jfv8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.589370817s
Oct  2 13:41:57.192: INFO: Pod "pod-subpath-test-dynamicpv-jfv8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.784653708s
Oct  2 13:41:59.388: INFO: Pod "pod-subpath-test-dynamicpv-jfv8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.980399172s
Oct  2 13:42:01.582: INFO: Pod "pod-subpath-test-dynamicpv-jfv8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.17467474s
... skipping 3 lines ...
Oct  2 13:42:10.368: INFO: Pod "pod-subpath-test-dynamicpv-jfv8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.960533479s
Oct  2 13:42:12.571: INFO: Pod "pod-subpath-test-dynamicpv-jfv8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.163851369s
Oct  2 13:42:14.765: INFO: Pod "pod-subpath-test-dynamicpv-jfv8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.358060064s
Oct  2 13:42:16.961: INFO: Pod "pod-subpath-test-dynamicpv-jfv8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.553904278s
Oct  2 13:42:19.161: INFO: Pod "pod-subpath-test-dynamicpv-jfv8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.753821712s
STEP: Saw pod success
Oct  2 13:42:19.161: INFO: Pod "pod-subpath-test-dynamicpv-jfv8" satisfied condition "Succeeded or Failed"
Oct  2 13:42:19.399: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-jfv8 container test-container-subpath-dynamicpv-jfv8: <nil>
STEP: delete the pod
Oct  2 13:42:19.811: INFO: Waiting for pod pod-subpath-test-dynamicpv-jfv8 to disappear
Oct  2 13:42:20.004: INFO: Pod pod-subpath-test-dynamicpv-jfv8 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-jfv8
Oct  2 13:42:20.004: INFO: Deleting pod "pod-subpath-test-dynamicpv-jfv8" in namespace "provisioning-3835"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 62 lines ...
Oct  2 13:41:52.993: INFO: PersistentVolumeClaim csi-hostpath2svm2 found but phase is Pending instead of Bound.
Oct  2 13:41:55.204: INFO: PersistentVolumeClaim csi-hostpath2svm2 found but phase is Pending instead of Bound.
Oct  2 13:41:57.402: INFO: PersistentVolumeClaim csi-hostpath2svm2 found but phase is Pending instead of Bound.
Oct  2 13:41:59.592: INFO: PersistentVolumeClaim csi-hostpath2svm2 found and phase=Bound (24.318646359s)
STEP: Creating pod pod-subpath-test-dynamicpv-hzs2
STEP: Creating a pod to test subpath
Oct  2 13:42:00.169: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-hzs2" in namespace "provisioning-5029" to be "Succeeded or Failed"
Oct  2 13:42:00.358: INFO: Pod "pod-subpath-test-dynamicpv-hzs2": Phase="Pending", Reason="", readiness=false. Elapsed: 189.387218ms
Oct  2 13:42:02.549: INFO: Pod "pod-subpath-test-dynamicpv-hzs2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380661973s
Oct  2 13:42:04.742: INFO: Pod "pod-subpath-test-dynamicpv-hzs2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572796461s
Oct  2 13:42:06.933: INFO: Pod "pod-subpath-test-dynamicpv-hzs2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.76385092s
Oct  2 13:42:09.125: INFO: Pod "pod-subpath-test-dynamicpv-hzs2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.955743615s
Oct  2 13:42:11.316: INFO: Pod "pod-subpath-test-dynamicpv-hzs2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.146869719s
Oct  2 13:42:13.506: INFO: Pod "pod-subpath-test-dynamicpv-hzs2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.337024755s
Oct  2 13:42:15.696: INFO: Pod "pod-subpath-test-dynamicpv-hzs2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.527533644s
Oct  2 13:42:17.887: INFO: Pod "pod-subpath-test-dynamicpv-hzs2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.71832785s
Oct  2 13:42:20.096: INFO: Pod "pod-subpath-test-dynamicpv-hzs2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.927469783s
STEP: Saw pod success
Oct  2 13:42:20.096: INFO: Pod "pod-subpath-test-dynamicpv-hzs2" satisfied condition "Succeeded or Failed"
Oct  2 13:42:20.296: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-hzs2 container test-container-volume-dynamicpv-hzs2: <nil>
STEP: delete the pod
Oct  2 13:42:20.707: INFO: Waiting for pod pod-subpath-test-dynamicpv-hzs2 to disappear
Oct  2 13:42:20.896: INFO: Pod pod-subpath-test-dynamicpv-hzs2 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-hzs2
Oct  2 13:42:20.896: INFO: Deleting pod "pod-subpath-test-dynamicpv-hzs2" in namespace "provisioning-5029"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":20,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:42:48.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1885" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":5,"skipped":24,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:49.005: INFO: Only supported for providers [gce gke] (not aws)
... skipping 86 lines ...
Oct  2 13:42:37.040: INFO: PersistentVolumeClaim pvc-x5lh2 found but phase is Pending instead of Bound.
Oct  2 13:42:39.231: INFO: PersistentVolumeClaim pvc-x5lh2 found and phase=Bound (6.761123019s)
Oct  2 13:42:39.231: INFO: Waiting up to 3m0s for PersistentVolume local-cw82f to have phase Bound
Oct  2 13:42:39.422: INFO: PersistentVolume local-cw82f found and phase=Bound (191.446539ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2wr2
STEP: Creating a pod to test subpath
Oct  2 13:42:39.992: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2wr2" in namespace "provisioning-4832" to be "Succeeded or Failed"
Oct  2 13:42:40.185: INFO: Pod "pod-subpath-test-preprovisionedpv-2wr2": Phase="Pending", Reason="", readiness=false. Elapsed: 193.042121ms
Oct  2 13:42:42.375: INFO: Pod "pod-subpath-test-preprovisionedpv-2wr2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382674789s
Oct  2 13:42:44.565: INFO: Pod "pod-subpath-test-preprovisionedpv-2wr2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.573250478s
Oct  2 13:42:46.756: INFO: Pod "pod-subpath-test-preprovisionedpv-2wr2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.763348298s
STEP: Saw pod success
Oct  2 13:42:46.756: INFO: Pod "pod-subpath-test-preprovisionedpv-2wr2" satisfied condition "Succeeded or Failed"
Oct  2 13:42:46.946: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-2wr2 container test-container-subpath-preprovisionedpv-2wr2: <nil>
STEP: delete the pod
Oct  2 13:42:47.336: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2wr2 to disappear
Oct  2 13:42:47.525: INFO: Pod pod-subpath-test-preprovisionedpv-2wr2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2wr2
Oct  2 13:42:47.525: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2wr2" in namespace "provisioning-4832"
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-zcmx
STEP: Creating a pod to test atomic-volume-subpath
Oct  2 13:42:24.455: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zcmx" in namespace "subpath-3510" to be "Succeeded or Failed"
Oct  2 13:42:24.647: INFO: Pod "pod-subpath-test-downwardapi-zcmx": Phase="Pending", Reason="", readiness=false. Elapsed: 191.491082ms
Oct  2 13:42:26.851: INFO: Pod "pod-subpath-test-downwardapi-zcmx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396006932s
Oct  2 13:42:29.043: INFO: Pod "pod-subpath-test-downwardapi-zcmx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.587557681s
Oct  2 13:42:31.235: INFO: Pod "pod-subpath-test-downwardapi-zcmx": Phase="Running", Reason="", readiness=true. Elapsed: 6.779692823s
Oct  2 13:42:33.427: INFO: Pod "pod-subpath-test-downwardapi-zcmx": Phase="Running", Reason="", readiness=true. Elapsed: 8.972052036s
Oct  2 13:42:35.620: INFO: Pod "pod-subpath-test-downwardapi-zcmx": Phase="Running", Reason="", readiness=true. Elapsed: 11.164276219s
Oct  2 13:42:37.812: INFO: Pod "pod-subpath-test-downwardapi-zcmx": Phase="Running", Reason="", readiness=true. Elapsed: 13.35712142s
Oct  2 13:42:40.007: INFO: Pod "pod-subpath-test-downwardapi-zcmx": Phase="Running", Reason="", readiness=true. Elapsed: 15.551739131s
Oct  2 13:42:42.199: INFO: Pod "pod-subpath-test-downwardapi-zcmx": Phase="Running", Reason="", readiness=true. Elapsed: 17.743719724s
Oct  2 13:42:44.391: INFO: Pod "pod-subpath-test-downwardapi-zcmx": Phase="Running", Reason="", readiness=true. Elapsed: 19.936084999s
Oct  2 13:42:46.584: INFO: Pod "pod-subpath-test-downwardapi-zcmx": Phase="Running", Reason="", readiness=true. Elapsed: 22.128980245s
Oct  2 13:42:48.777: INFO: Pod "pod-subpath-test-downwardapi-zcmx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.322108139s
STEP: Saw pod success
Oct  2 13:42:48.778: INFO: Pod "pod-subpath-test-downwardapi-zcmx" satisfied condition "Succeeded or Failed"
Oct  2 13:42:48.970: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-subpath-test-downwardapi-zcmx container test-container-subpath-downwardapi-zcmx: <nil>
STEP: delete the pod
Oct  2 13:42:49.376: INFO: Waiting for pod pod-subpath-test-downwardapi-zcmx to disappear
Oct  2 13:42:49.567: INFO: Pod pod-subpath-test-downwardapi-zcmx no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-zcmx
Oct  2 13:42:49.567: INFO: Deleting pod "pod-subpath-test-downwardapi-zcmx" in namespace "subpath-3510"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:50.151: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:50.152: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 145 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:42:52.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-492" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":11,"skipped":68,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":4,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":7,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:58.990: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 434 lines ...
• [SLOW TEST:80.060 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":4,"skipped":26,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:42:59.366: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 242 lines ...
Oct  2 13:42:11.653: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-qbh6s] to have phase Bound
Oct  2 13:42:11.841: INFO: PersistentVolumeClaim pvc-qbh6s found and phase=Bound (188.124327ms)
STEP: Deleting the previously created pod
Oct  2 13:42:32.840: INFO: Deleting pod "pvc-volume-tester-4zk8f" in namespace "csi-mock-volumes-2756"
Oct  2 13:42:33.035: INFO: Wait up to 5m0s for pod "pvc-volume-tester-4zk8f" to be fully deleted
STEP: Checking CSI driver logs
Oct  2 13:42:41.608: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ec3db3c8-2060-46dc-9e38-2960385f93da/volumes/kubernetes.io~csi/pvc-b43b6dfe-3ca4-4700-8d8f-2b7c9cd5036f/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-4zk8f
Oct  2 13:42:41.608: INFO: Deleting pod "pvc-volume-tester-4zk8f" in namespace "csi-mock-volumes-2756"
STEP: Deleting claim pvc-qbh6s
Oct  2 13:42:42.175: INFO: Waiting up to 2m0s for PersistentVolume pvc-b43b6dfe-3ca4-4700-8d8f-2b7c9cd5036f to get deleted
Oct  2 13:42:42.364: INFO: PersistentVolume pvc-b43b6dfe-3ca4-4700-8d8f-2b7c9cd5036f was removed
STEP: Deleting storageclass csi-mock-volumes-2756-sc9lbls
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=nil
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":3,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:01.656: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 49 lines ...
• [SLOW TEST:11.502 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":12,"skipped":69,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:04.167: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 55 lines ...
• [SLOW TEST:18.008 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":6,"skipped":39,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 5 lines ...
[It] should allow exec of files on the volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
Oct  2 13:43:02.113: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  2 13:43:02.113: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-zlc5
STEP: Creating a pod to test exec-volume-test
Oct  2 13:43:02.316: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-zlc5" in namespace "volume-4235" to be "Succeeded or Failed"
Oct  2 13:43:02.513: INFO: Pod "exec-volume-test-inlinevolume-zlc5": Phase="Pending", Reason="", readiness=false. Elapsed: 196.641644ms
Oct  2 13:43:04.708: INFO: Pod "exec-volume-test-inlinevolume-zlc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.39207883s
Oct  2 13:43:06.900: INFO: Pod "exec-volume-test-inlinevolume-zlc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.583940759s
STEP: Saw pod success
Oct  2 13:43:06.900: INFO: Pod "exec-volume-test-inlinevolume-zlc5" satisfied condition "Succeeded or Failed"
Oct  2 13:43:07.092: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod exec-volume-test-inlinevolume-zlc5 container exec-container-inlinevolume-zlc5: <nil>
STEP: delete the pod
Oct  2 13:43:07.497: INFO: Waiting for pod exec-volume-test-inlinevolume-zlc5 to disappear
Oct  2 13:43:07.688: INFO: Pod exec-volume-test-inlinevolume-zlc5 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-zlc5
Oct  2 13:43:07.688: INFO: Deleting pod "exec-volume-test-inlinevolume-zlc5" in namespace "volume-4235"
... skipping 29 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3430
STEP: Creating statefulset with conflicting port in namespace statefulset-3430
STEP: Waiting until pod test-pod will start running in namespace statefulset-3430
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3430
Oct  2 13:42:53.089: INFO: Observed stateful pod in namespace: statefulset-3430, name: ss-0, uid: 10fb3daa-7373-4479-bb4d-775807edaf4c, status phase: Pending. Waiting for statefulset controller to delete.
Oct  2 13:42:53.512: INFO: Observed stateful pod in namespace: statefulset-3430, name: ss-0, uid: 10fb3daa-7373-4479-bb4d-775807edaf4c, status phase: Failed. Waiting for statefulset controller to delete.
Oct  2 13:42:53.518: INFO: Observed stateful pod in namespace: statefulset-3430, name: ss-0, uid: 10fb3daa-7373-4479-bb4d-775807edaf4c, status phase: Failed. Waiting for statefulset controller to delete.
Oct  2 13:42:53.522: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3430
STEP: Removing pod with conflicting port in namespace statefulset-3430
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3430 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Oct  2 13:42:58.306: INFO: Deleting all statefulset in ns statefulset-3430
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":5,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:10.458: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 251 lines ...
Oct  2 13:43:01.967: INFO: Waiting for pod aws-client to disappear
Oct  2 13:43:02.159: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Oct  2 13:43:02.159: INFO: Deleting PersistentVolumeClaim "pvc-z4dbp"
Oct  2 13:43:02.352: INFO: Deleting PersistentVolume "aws-956dj"
Oct  2 13:43:03.505: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-01305b5ce85d8ca2b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01305b5ce85d8ca2b is currently attached to i-04eb7a9acdc53fb9b
	status code: 400, request id: 6cbe64e0-cfa4-4a9f-8499-95593911ebb4
Oct  2 13:43:09.402: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-01305b5ce85d8ca2b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01305b5ce85d8ca2b is currently attached to i-04eb7a9acdc53fb9b
	status code: 400, request id: cd3cb4ae-edd4-4b54-a1ba-ab311131d8d9
Oct  2 13:43:15.300: INFO: Successfully deleted PD "aws://ap-southeast-2a/vol-01305b5ce85d8ca2b".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:43:15.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8385" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":14,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":4,"skipped":30,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
... skipping 102 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":4,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:43:10.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct  2 13:43:11.680: INFO: Waiting up to 5m0s for pod "pod-f10f15e0-2a51-4cda-badf-2fca7d2fce21" in namespace "emptydir-2921" to be "Succeeded or Failed"
Oct  2 13:43:11.874: INFO: Pod "pod-f10f15e0-2a51-4cda-badf-2fca7d2fce21": Phase="Pending", Reason="", readiness=false. Elapsed: 193.384588ms
Oct  2 13:43:14.085: INFO: Pod "pod-f10f15e0-2a51-4cda-badf-2fca7d2fce21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40494518s
Oct  2 13:43:16.279: INFO: Pod "pod-f10f15e0-2a51-4cda-badf-2fca7d2fce21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.598805668s
Oct  2 13:43:18.473: INFO: Pod "pod-f10f15e0-2a51-4cda-badf-2fca7d2fce21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.792670712s
STEP: Saw pod success
Oct  2 13:43:18.473: INFO: Pod "pod-f10f15e0-2a51-4cda-badf-2fca7d2fce21" satisfied condition "Succeeded or Failed"
Oct  2 13:43:18.666: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-f10f15e0-2a51-4cda-badf-2fca7d2fce21 container test-container: <nil>
STEP: delete the pod
Oct  2 13:43:19.058: INFO: Waiting for pod pod-f10f15e0-2a51-4cda-badf-2fca7d2fce21 to disappear
Oct  2 13:43:19.251: INFO: Pod pod-f10f15e0-2a51-4cda-badf-2fca7d2fce21 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.145 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:19.662: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Oct  2 13:43:18.109: INFO: Waiting up to 5m0s for pod "pod-662d663e-25d6-4dbd-8365-bfb00ebb4dd6" in namespace "emptydir-943" to be "Succeeded or Failed"
Oct  2 13:43:18.299: INFO: Pod "pod-662d663e-25d6-4dbd-8365-bfb00ebb4dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 189.388967ms
Oct  2 13:43:20.493: INFO: Pod "pod-662d663e-25d6-4dbd-8365-bfb00ebb4dd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.383959608s
STEP: Saw pod success
Oct  2 13:43:20.493: INFO: Pod "pod-662d663e-25d6-4dbd-8365-bfb00ebb4dd6" satisfied condition "Succeeded or Failed"
Oct  2 13:43:20.685: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod pod-662d663e-25d6-4dbd-8365-bfb00ebb4dd6 container test-container: <nil>
STEP: delete the pod
Oct  2 13:43:21.069: INFO: Waiting for pod pod-662d663e-25d6-4dbd-8365-bfb00ebb4dd6 to disappear
Oct  2 13:43:21.258: INFO: Pod pod-662d663e-25d6-4dbd-8365-bfb00ebb4dd6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:43:21.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-943" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":5,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:21.674: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 60 lines ...
Oct  2 13:43:08.617: INFO: PersistentVolumeClaim pvc-vglgv found but phase is Pending instead of Bound.
Oct  2 13:43:10.828: INFO: PersistentVolumeClaim pvc-vglgv found and phase=Bound (4.591457617s)
Oct  2 13:43:10.828: INFO: Waiting up to 3m0s for PersistentVolume local-w9vbr to have phase Bound
Oct  2 13:43:11.028: INFO: PersistentVolume local-w9vbr found and phase=Bound (200.162699ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rx4l
STEP: Creating a pod to test subpath
Oct  2 13:43:11.635: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rx4l" in namespace "provisioning-2547" to be "Succeeded or Failed"
Oct  2 13:43:11.826: INFO: Pod "pod-subpath-test-preprovisionedpv-rx4l": Phase="Pending", Reason="", readiness=false. Elapsed: 190.147146ms
Oct  2 13:43:14.020: INFO: Pod "pod-subpath-test-preprovisionedpv-rx4l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384353425s
Oct  2 13:43:16.211: INFO: Pod "pod-subpath-test-preprovisionedpv-rx4l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575650619s
Oct  2 13:43:18.402: INFO: Pod "pod-subpath-test-preprovisionedpv-rx4l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.766738417s
Oct  2 13:43:20.596: INFO: Pod "pod-subpath-test-preprovisionedpv-rx4l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.96046044s
STEP: Saw pod success
Oct  2 13:43:20.596: INFO: Pod "pod-subpath-test-preprovisionedpv-rx4l" satisfied condition "Succeeded or Failed"
Oct  2 13:43:20.787: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-rx4l container test-container-subpath-preprovisionedpv-rx4l: <nil>
STEP: delete the pod
Oct  2 13:43:21.177: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rx4l to disappear
Oct  2 13:43:21.367: INFO: Pod pod-subpath-test-preprovisionedpv-rx4l no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rx4l
Oct  2 13:43:21.367: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rx4l" in namespace "provisioning-2547"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":40,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a read only busybox container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:26.041: INFO: Driver local doesn't support ext3 -- skipping
... skipping 14 lines ...
      Driver local doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":7,"skipped":83,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:43:01.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 110 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should not deadlock when a pod's predecessor fails
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:250
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":9,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:31.472: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
Oct  2 13:43:23.268: INFO: PersistentVolumeClaim pvc-5qqkt found but phase is Pending instead of Bound.
Oct  2 13:43:25.459: INFO: PersistentVolumeClaim pvc-5qqkt found and phase=Bound (8.950720918s)
Oct  2 13:43:25.459: INFO: Waiting up to 3m0s for PersistentVolume local-gzp2g to have phase Bound
Oct  2 13:43:25.649: INFO: PersistentVolume local-gzp2g found and phase=Bound (189.439566ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-47cz
STEP: Creating a pod to test subpath
Oct  2 13:43:26.219: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-47cz" in namespace "provisioning-6016" to be "Succeeded or Failed"
Oct  2 13:43:26.409: INFO: Pod "pod-subpath-test-preprovisionedpv-47cz": Phase="Pending", Reason="", readiness=false. Elapsed: 189.686357ms
Oct  2 13:43:28.599: INFO: Pod "pod-subpath-test-preprovisionedpv-47cz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380436137s
Oct  2 13:43:30.790: INFO: Pod "pod-subpath-test-preprovisionedpv-47cz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.570898474s
STEP: Saw pod success
Oct  2 13:43:30.790: INFO: Pod "pod-subpath-test-preprovisionedpv-47cz" satisfied condition "Succeeded or Failed"
Oct  2 13:43:30.981: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-47cz container test-container-subpath-preprovisionedpv-47cz: <nil>
STEP: delete the pod
Oct  2 13:43:31.383: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-47cz to disappear
Oct  2 13:43:31.573: INFO: Pod pod-subpath-test-preprovisionedpv-47cz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-47cz
Oct  2 13:43:31.573: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-47cz" in namespace "provisioning-6016"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:35.553: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 53 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:41:42.950: INFO: >>> kubeConfig: /root/.kube/config
... skipping 69 lines ...
Oct  2 13:43:16.000: INFO: Waiting for pod aws-client to disappear
Oct  2 13:43:16.190: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Oct  2 13:43:16.190: INFO: Deleting PersistentVolumeClaim "pvc-rc6fp"
Oct  2 13:43:16.384: INFO: Deleting PersistentVolume "aws-vjmtr"
Oct  2 13:43:17.468: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-04564575d53cb7c3e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-04564575d53cb7c3e is currently attached to i-04eb7a9acdc53fb9b
	status code: 400, request id: b7ce8118-7c08-4d91-8929-d2e8c46e85fa
Oct  2 13:43:23.386: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-04564575d53cb7c3e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-04564575d53cb7c3e is currently attached to i-04eb7a9acdc53fb9b
	status code: 400, request id: c689c1e5-af1e-4397-98d0-b85b50c0028e
Oct  2 13:43:29.292: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-04564575d53cb7c3e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-04564575d53cb7c3e is currently attached to i-04eb7a9acdc53fb9b
	status code: 400, request id: 6f0bfd77-05f4-4df3-8232-66011072f905
Oct  2 13:43:35.210: INFO: Successfully deleted PD "aws://ap-southeast-2a/vol-04564575d53cb7c3e".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:43:35.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1926" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":3,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:35.605: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Oct  2 13:43:19.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token: 
Oct  2 13:43:20.625: INFO: Waiting up to 5m0s for pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0" in namespace "svcaccounts-6767" to be "Succeeded or Failed"
Oct  2 13:43:20.816: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0": Phase="Pending", Reason="", readiness=false. Elapsed: 191.229421ms
Oct  2 13:43:23.012: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387718081s
Oct  2 13:43:25.207: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.581948895s
STEP: Saw pod success
Oct  2 13:43:25.207: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0" satisfied condition "Succeeded or Failed"
Oct  2 13:43:25.398: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 13:43:25.801: INFO: Waiting for pod test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0 to disappear
Oct  2 13:43:25.992: INFO: Pod test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0 no longer exists
STEP: Creating a pod to test service account token: 
Oct  2 13:43:26.185: INFO: Waiting up to 5m0s for pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0" in namespace "svcaccounts-6767" to be "Succeeded or Failed"
Oct  2 13:43:26.378: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0": Phase="Pending", Reason="", readiness=false. Elapsed: 192.321914ms
Oct  2 13:43:28.570: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384860487s
Oct  2 13:43:30.763: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.577254846s
STEP: Saw pod success
Oct  2 13:43:30.763: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0" satisfied condition "Succeeded or Failed"
Oct  2 13:43:30.961: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 13:43:31.355: INFO: Waiting for pod test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0 to disappear
Oct  2 13:43:31.548: INFO: Pod test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0 no longer exists
STEP: Creating a pod to test service account token: 
Oct  2 13:43:31.744: INFO: Waiting up to 5m0s for pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0" in namespace "svcaccounts-6767" to be "Succeeded or Failed"
Oct  2 13:43:31.940: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0": Phase="Pending", Reason="", readiness=false. Elapsed: 195.264888ms
Oct  2 13:43:34.132: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0": Phase="Running", Reason="", readiness=true. Elapsed: 2.388011783s
Oct  2 13:43:36.329: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.584298908s
STEP: Saw pod success
Oct  2 13:43:36.329: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0" satisfied condition "Succeeded or Failed"
Oct  2 13:43:36.520: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 13:43:36.943: INFO: Waiting for pod test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0 to disappear
Oct  2 13:43:37.142: INFO: Pod test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0 no longer exists
STEP: Creating a pod to test service account token: 
Oct  2 13:43:37.360: INFO: Waiting up to 5m0s for pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0" in namespace "svcaccounts-6767" to be "Succeeded or Failed"
Oct  2 13:43:37.555: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0": Phase="Pending", Reason="", readiness=false. Elapsed: 194.69025ms
Oct  2 13:43:39.748: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.387383403s
STEP: Saw pod success
Oct  2 13:43:39.748: INFO: Pod "test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0" satisfied condition "Succeeded or Failed"
Oct  2 13:43:39.939: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 13:43:40.346: INFO: Waiting for pod test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0 to disappear
Oct  2 13:43:40.546: INFO: Pod test-pod-e5e6d765-1617-4b4d-bf07-aecf4470fad0 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:21.471 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":36,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:40.996: INFO: Only supported for providers [gce gke] (not aws)
... skipping 95 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
... skipping 40 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":75,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:43:24.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 57 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":14,"skipped":75,"failed":0}

SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:41.515: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 5 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 39 lines ...
• [SLOW TEST:6.061 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Deployment should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":8,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:41.667: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:43:15.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 110 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:43:43.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2367" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":9,"skipped":51,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:43.924: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 27 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:43:44.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-330" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":15,"skipped":92,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:69.527 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:47.886 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":8,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:46.918: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:43:46.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-4964" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":6,"skipped":65,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 165 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":7,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:47.459: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 36 lines ...
Oct  2 13:43:37.598: INFO: PersistentVolumeClaim pvc-flzdq found but phase is Pending instead of Bound.
Oct  2 13:43:39.789: INFO: PersistentVolumeClaim pvc-flzdq found and phase=Bound (4.575788775s)
Oct  2 13:43:39.789: INFO: Waiting up to 3m0s for PersistentVolume local-r9t98 to have phase Bound
Oct  2 13:43:39.980: INFO: PersistentVolume local-r9t98 found and phase=Bound (191.264087ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-sbb9
STEP: Creating a pod to test subpath
Oct  2 13:43:40.565: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-sbb9" in namespace "provisioning-1556" to be "Succeeded or Failed"
Oct  2 13:43:40.758: INFO: Pod "pod-subpath-test-preprovisionedpv-sbb9": Phase="Pending", Reason="", readiness=false. Elapsed: 193.097579ms
Oct  2 13:43:42.959: INFO: Pod "pod-subpath-test-preprovisionedpv-sbb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.39336419s
Oct  2 13:43:45.152: INFO: Pod "pod-subpath-test-preprovisionedpv-sbb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.586510248s
STEP: Saw pod success
Oct  2 13:43:45.152: INFO: Pod "pod-subpath-test-preprovisionedpv-sbb9" satisfied condition "Succeeded or Failed"
Oct  2 13:43:45.343: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-sbb9 container test-container-subpath-preprovisionedpv-sbb9: <nil>
STEP: delete the pod
Oct  2 13:43:45.735: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-sbb9 to disappear
Oct  2 13:43:45.926: INFO: Pod pod-subpath-test-preprovisionedpv-sbb9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-sbb9
Oct  2 13:43:45.926: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-sbb9" in namespace "provisioning-1556"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":8,"skipped":87,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:48.717: INFO: Only supported for providers [vsphere] (not aws)
... skipping 125 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:43:08.280: INFO: >>> kubeConfig: /root/.kube/config
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":4,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:43:49.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-9855" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:50.083: INFO: Only supported for providers [gce gke] (not aws)
... skipping 83 lines ...
• [SLOW TEST:5.058 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":9,"skipped":78,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:52.081: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 92 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Oct  2 13:43:48.631: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7488" to be "Succeeded or Failed"
Oct  2 13:43:48.822: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 190.468282ms
Oct  2 13:43:51.013: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.381952726s
STEP: Saw pod success
Oct  2 13:43:51.013: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct  2 13:43:51.202: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Oct  2 13:43:51.588: INFO: Waiting for pod pod-host-path-test to disappear
Oct  2 13:43:51.777: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:43:51.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7488" for this suite.

•SS
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":8,"skipped":33,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:52.183: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 50 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":10,"skipped":61,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:52.898: INFO: Only supported for providers [vsphere] (not aws)
... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:43:53.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1420" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":10,"skipped":103,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:53.591: INFO: Only supported for providers [vsphere] (not aws)
... skipping 162 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 33 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 50 lines ...
• [SLOW TEST:6.478 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:43:47.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Oct  2 13:43:48.442: INFO: Waiting up to 5m0s for pod "var-expansion-6c0bbedd-10bf-4357-aca5-8a7e8332ea9e" in namespace "var-expansion-7273" to be "Succeeded or Failed"
Oct  2 13:43:48.635: INFO: Pod "var-expansion-6c0bbedd-10bf-4357-aca5-8a7e8332ea9e": Phase="Pending", Reason="", readiness=false. Elapsed: 192.147002ms
Oct  2 13:43:50.837: INFO: Pod "var-expansion-6c0bbedd-10bf-4357-aca5-8a7e8332ea9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.394666241s
Oct  2 13:43:53.028: INFO: Pod "var-expansion-6c0bbedd-10bf-4357-aca5-8a7e8332ea9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.586049484s
Oct  2 13:43:55.299: INFO: Pod "var-expansion-6c0bbedd-10bf-4357-aca5-8a7e8332ea9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.856582559s
STEP: Saw pod success
Oct  2 13:43:55.299: INFO: Pod "var-expansion-6c0bbedd-10bf-4357-aca5-8a7e8332ea9e" satisfied condition "Succeeded or Failed"
Oct  2 13:43:55.490: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod var-expansion-6c0bbedd-10bf-4357-aca5-8a7e8332ea9e container dapi-container: <nil>
STEP: delete the pod
Oct  2 13:43:55.892: INFO: Waiting for pod var-expansion-6c0bbedd-10bf-4357-aca5-8a7e8332ea9e to disappear
Oct  2 13:43:56.088: INFO: Pod var-expansion-6c0bbedd-10bf-4357-aca5-8a7e8332ea9e no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.177 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:56.485: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 303 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:43:58.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3045" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":8,"skipped":130,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:13.577 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":16,"skipped":93,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:43:58.851: INFO: Driver local doesn't support ext4 -- skipping
... skipping 161 lines ...
• [SLOW TEST:18.808 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":6,"skipped":70,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:00.021: INFO: Only supported for providers [gce gke] (not aws)
... skipping 76 lines ...
Oct  2 13:42:52.665: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathdjtcs] to have phase Bound
Oct  2 13:42:52.855: INFO: PersistentVolumeClaim csi-hostpathdjtcs found but phase is Pending instead of Bound.
Oct  2 13:42:55.045: INFO: PersistentVolumeClaim csi-hostpathdjtcs found but phase is Pending instead of Bound.
Oct  2 13:42:57.234: INFO: PersistentVolumeClaim csi-hostpathdjtcs found and phase=Bound (4.568612639s)
STEP: Creating pod pod-subpath-test-dynamicpv-2c69
STEP: Creating a pod to test subpath
Oct  2 13:42:57.825: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-2c69" in namespace "provisioning-3514" to be "Succeeded or Failed"
Oct  2 13:42:58.015: INFO: Pod "pod-subpath-test-dynamicpv-2c69": Phase="Pending", Reason="", readiness=false. Elapsed: 189.963611ms
Oct  2 13:43:00.229: INFO: Pod "pod-subpath-test-dynamicpv-2c69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.404351708s
Oct  2 13:43:02.435: INFO: Pod "pod-subpath-test-dynamicpv-2c69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.610494815s
Oct  2 13:43:04.626: INFO: Pod "pod-subpath-test-dynamicpv-2c69": Phase="Pending", Reason="", readiness=false. Elapsed: 6.800654726s
Oct  2 13:43:06.816: INFO: Pod "pod-subpath-test-dynamicpv-2c69": Phase="Pending", Reason="", readiness=false. Elapsed: 8.99117281s
Oct  2 13:43:09.007: INFO: Pod "pod-subpath-test-dynamicpv-2c69": Phase="Pending", Reason="", readiness=false. Elapsed: 11.181834434s
Oct  2 13:43:11.201: INFO: Pod "pod-subpath-test-dynamicpv-2c69": Phase="Pending", Reason="", readiness=false. Elapsed: 13.375895595s
Oct  2 13:43:13.393: INFO: Pod "pod-subpath-test-dynamicpv-2c69": Phase="Pending", Reason="", readiness=false. Elapsed: 15.567599041s
Oct  2 13:43:15.584: INFO: Pod "pod-subpath-test-dynamicpv-2c69": Phase="Pending", Reason="", readiness=false. Elapsed: 17.75945238s
Oct  2 13:43:17.775: INFO: Pod "pod-subpath-test-dynamicpv-2c69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.950516655s
STEP: Saw pod success
Oct  2 13:43:17.776: INFO: Pod "pod-subpath-test-dynamicpv-2c69" satisfied condition "Succeeded or Failed"
Oct  2 13:43:17.976: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-2c69 container test-container-volume-dynamicpv-2c69: <nil>
STEP: delete the pod
Oct  2 13:43:18.375: INFO: Waiting for pod pod-subpath-test-dynamicpv-2c69 to disappear
Oct  2 13:43:18.565: INFO: Pod pod-subpath-test-dynamicpv-2c69 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-2c69
Oct  2 13:43:18.565: INFO: Deleting pod "pod-subpath-test-dynamicpv-2c69" in namespace "provisioning-3514"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:43:53.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:44:01.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2963" for this suite.


• [SLOW TEST:8.002 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
------------------------------
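
"Not locally restarted" refers to restartPolicy: Never in the Job's pod template: a failing task surfaces as a failed pod, and the Job controller creates a replacement pod instead of the kubelet restarting the container in place. A sketch of such a sometimes-failing Job (the coin-flip command and names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: flaky-job-demo
spec:
  completions: 4
  parallelism: 2
  backoffLimit: 6                # tolerate several failed pods before giving up
  template:
    spec:
      restartPolicy: Never       # failures produce new pods, not in-place restarts
      containers:
      - name: work
        image: busybox:1.34
        # exits nonzero roughly half the time
        command: ["sh", "-c", "awk 'BEGIN{srand(); exit (rand()<0.5)}'"]
EOF
kubectl wait --for=condition=complete job/flaky-job-demo --timeout=300s
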
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:43:01.665: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
• [SLOW TEST:60.352 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:194
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":4,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 44 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 13:44:00.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-faf6f8d0-b048-4a41-866b-703432ffabe3" in namespace "projected-6744" to be "Succeeded or Failed"
Oct  2 13:44:00.277: INFO: Pod "downwardapi-volume-faf6f8d0-b048-4a41-866b-703432ffabe3": Phase="Pending", Reason="", readiness=false. Elapsed: 189.901026ms
Oct  2 13:44:02.467: INFO: Pod "downwardapi-volume-faf6f8d0-b048-4a41-866b-703432ffabe3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379672228s
Oct  2 13:44:04.657: INFO: Pod "downwardapi-volume-faf6f8d0-b048-4a41-866b-703432ffabe3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.569120008s
STEP: Saw pod success
Oct  2 13:44:04.657: INFO: Pod "downwardapi-volume-faf6f8d0-b048-4a41-866b-703432ffabe3" satisfied condition "Succeeded or Failed"
Oct  2 13:44:04.849: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod downwardapi-volume-faf6f8d0-b048-4a41-866b-703432ffabe3 container client-container: <nil>
STEP: delete the pod
Oct  2 13:44:05.265: INFO: Waiting for pod downwardapi-volume-faf6f8d0-b048-4a41-866b-703432ffabe3 to disappear
Oct  2 13:44:05.454: INFO: Pod downwardapi-volume-faf6f8d0-b048-4a41-866b-703432ffabe3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.891 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":107,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:110.885 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete successful finished jobs with limit of one successful job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:283
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":9,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  2 13:43:41.530: INFO: File wheezy_udp@dns-test-service-3.dns-1986.svc.cluster.local from pod  dns-1986/dns-test-4bfc939d-3cc5-4c11-ade0-b5b2b3184539 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  2 13:43:41.720: INFO: File jessie_udp@dns-test-service-3.dns-1986.svc.cluster.local from pod  dns-1986/dns-test-4bfc939d-3cc5-4c11-ade0-b5b2b3184539 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  2 13:43:41.720: INFO: Lookups using dns-1986/dns-test-4bfc939d-3cc5-4c11-ade0-b5b2b3184539 failed for: [wheezy_udp@dns-test-service-3.dns-1986.svc.cluster.local jessie_udp@dns-test-service-3.dns-1986.svc.cluster.local]

Oct  2 13:43:46.911: INFO: File wheezy_udp@dns-test-service-3.dns-1986.svc.cluster.local from pod  dns-1986/dns-test-4bfc939d-3cc5-4c11-ade0-b5b2b3184539 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  2 13:43:47.101: INFO: File jessie_udp@dns-test-service-3.dns-1986.svc.cluster.local from pod  dns-1986/dns-test-4bfc939d-3cc5-4c11-ade0-b5b2b3184539 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  2 13:43:47.101: INFO: Lookups using dns-1986/dns-test-4bfc939d-3cc5-4c11-ade0-b5b2b3184539 failed for: [wheezy_udp@dns-test-service-3.dns-1986.svc.cluster.local jessie_udp@dns-test-service-3.dns-1986.svc.cluster.local]

Oct  2 13:43:51.917: INFO: File wheezy_udp@dns-test-service-3.dns-1986.svc.cluster.local from pod  dns-1986/dns-test-4bfc939d-3cc5-4c11-ade0-b5b2b3184539 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  2 13:43:52.106: INFO: File jessie_udp@dns-test-service-3.dns-1986.svc.cluster.local from pod  dns-1986/dns-test-4bfc939d-3cc5-4c11-ade0-b5b2b3184539 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  2 13:43:52.107: INFO: Lookups using dns-1986/dns-test-4bfc939d-3cc5-4c11-ade0-b5b2b3184539 failed for: [wheezy_udp@dns-test-service-3.dns-1986.svc.cluster.local jessie_udp@dns-test-service-3.dns-1986.svc.cluster.local]

Oct  2 13:43:56.914: INFO: File wheezy_udp@dns-test-service-3.dns-1986.svc.cluster.local from pod  dns-1986/dns-test-4bfc939d-3cc5-4c11-ade0-b5b2b3184539 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  2 13:43:57.105: INFO: Lookups using dns-1986/dns-test-4bfc939d-3cc5-4c11-ade0-b5b2b3184539 failed for: [wheezy_udp@dns-test-service-3.dns-1986.svc.cluster.local]

Oct  2 13:44:02.101: INFO: DNS probes using dns-test-4bfc939d-3cc5-4c11-ade0-b5b2b3184539 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1986.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1986.svc.cluster.local; sleep 1; done
... skipping 17 lines ...
• [SLOW TEST:45.126 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":6,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:06.842: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 91 lines ...
• [SLOW TEST:8.585 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":6,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  2 13:44:07.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-044e1a97-61f8-41f8-ae5f-6b4a5fd6ad18" in namespace "projected-5868" to be "Succeeded or Failed"
Oct  2 13:44:07.201: INFO: Pod "downwardapi-volume-044e1a97-61f8-41f8-ae5f-6b4a5fd6ad18": Phase="Pending", Reason="", readiness=false. Elapsed: 194.003235ms
Oct  2 13:44:09.394: INFO: Pod "downwardapi-volume-044e1a97-61f8-41f8-ae5f-6b4a5fd6ad18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.386897618s
STEP: Saw pod success
Oct  2 13:44:09.394: INFO: Pod "downwardapi-volume-044e1a97-61f8-41f8-ae5f-6b4a5fd6ad18" satisfied condition "Succeeded or Failed"
Oct  2 13:44:09.583: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod downwardapi-volume-044e1a97-61f8-41f8-ae5f-6b4a5fd6ad18 container client-container: <nil>
STEP: delete the pod
Oct  2 13:44:09.971: INFO: Waiting for pod downwardapi-volume-044e1a97-61f8-41f8-ae5f-6b4a5fd6ad18 to disappear
Oct  2 13:44:10.160: INFO: Pod downwardapi-volume-044e1a97-61f8-41f8-ae5f-6b4a5fd6ad18 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:44:10.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5868" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":108,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:10.576: INFO: Only supported for providers [gce gke] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-6116d9a3-b2a0-44f3-838c-cc92f3957ba8
STEP: Creating a pod to test consume configMaps
Oct  2 13:44:08.228: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c3d5b2df-61b2-47ac-bcae-68b28641138b" in namespace "projected-6948" to be "Succeeded or Failed"
Oct  2 13:44:08.417: INFO: Pod "pod-projected-configmaps-c3d5b2df-61b2-47ac-bcae-68b28641138b": Phase="Pending", Reason="", readiness=false. Elapsed: 188.97365ms
Oct  2 13:44:10.615: INFO: Pod "pod-projected-configmaps-c3d5b2df-61b2-47ac-bcae-68b28641138b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.386754861s
STEP: Saw pod success
Oct  2 13:44:10.615: INFO: Pod "pod-projected-configmaps-c3d5b2df-61b2-47ac-bcae-68b28641138b" satisfied condition "Succeeded or Failed"
Oct  2 13:44:10.806: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-projected-configmaps-c3d5b2df-61b2-47ac-bcae-68b28641138b container agnhost-container: <nil>
STEP: delete the pod
Oct  2 13:44:11.196: INFO: Waiting for pod pod-projected-configmaps-c3d5b2df-61b2-47ac-bcae-68b28641138b to disappear
Oct  2 13:44:11.388: INFO: Pod pod-projected-configmaps-c3d5b2df-61b2-47ac-bcae-68b28641138b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:44:11.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6948" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Oct  2 13:43:27.036: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-873pz2kh
STEP: creating a claim
Oct  2 13:43:27.230: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-ghkg
STEP: Creating a pod to test subpath
Oct  2 13:43:27.815: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ghkg" in namespace "provisioning-873" to be "Succeeded or Failed"
Oct  2 13:43:28.008: INFO: Pod "pod-subpath-test-dynamicpv-ghkg": Phase="Pending", Reason="", readiness=false. Elapsed: 193.380448ms
Oct  2 13:43:30.203: INFO: Pod "pod-subpath-test-dynamicpv-ghkg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387423195s
Oct  2 13:43:32.396: INFO: Pod "pod-subpath-test-dynamicpv-ghkg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.580854628s
Oct  2 13:43:34.590: INFO: Pod "pod-subpath-test-dynamicpv-ghkg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.775309956s
Oct  2 13:43:36.819: INFO: Pod "pod-subpath-test-dynamicpv-ghkg": Phase="Pending", Reason="", readiness=false. Elapsed: 9.003489319s
Oct  2 13:43:39.013: INFO: Pod "pod-subpath-test-dynamicpv-ghkg": Phase="Pending", Reason="", readiness=false. Elapsed: 11.19793931s
Oct  2 13:43:41.207: INFO: Pod "pod-subpath-test-dynamicpv-ghkg": Phase="Pending", Reason="", readiness=false. Elapsed: 13.39160596s
Oct  2 13:43:43.403: INFO: Pod "pod-subpath-test-dynamicpv-ghkg": Phase="Pending", Reason="", readiness=false. Elapsed: 15.587691116s
Oct  2 13:43:45.597: INFO: Pod "pod-subpath-test-dynamicpv-ghkg": Phase="Pending", Reason="", readiness=false. Elapsed: 17.782158967s
Oct  2 13:43:47.791: INFO: Pod "pod-subpath-test-dynamicpv-ghkg": Phase="Pending", Reason="", readiness=false. Elapsed: 19.975679174s
Oct  2 13:43:49.985: INFO: Pod "pod-subpath-test-dynamicpv-ghkg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.169987787s
STEP: Saw pod success
Oct  2 13:43:49.985: INFO: Pod "pod-subpath-test-dynamicpv-ghkg" satisfied condition "Succeeded or Failed"
Oct  2 13:43:50.177: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-ghkg container test-container-subpath-dynamicpv-ghkg: <nil>
STEP: delete the pod
Oct  2 13:43:50.572: INFO: Waiting for pod pod-subpath-test-dynamicpv-ghkg to disappear
Oct  2 13:43:50.765: INFO: Pod pod-subpath-test-dynamicpv-ghkg no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ghkg
Oct  2 13:43:50.765: INFO: Deleting pod "pod-subpath-test-dynamicpv-ghkg" in namespace "provisioning-873"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":8,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:13.322: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 161 lines ...
• [SLOW TEST:22.309 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":9,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:14.576: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 72 lines ...
• [SLOW TEST:13.798 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support building a client with a CSR
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:55
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":5,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:17.198: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 19 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:44:14.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Oct  2 13:44:15.802: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-490" to be "Succeeded or Failed"
Oct  2 13:44:15.995: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 192.681616ms
Oct  2 13:44:18.196: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393686134s
Oct  2 13:44:20.390: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.587898245s
Oct  2 13:44:20.390: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:44:20.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-490" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:245.667 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:21.000: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:44:21.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":3,"skipped":29,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
• [SLOW TEST:14.927 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":9,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:28.350: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 60 lines ...
Oct  2 13:44:22.339: INFO: PersistentVolumeClaim pvc-vhvp9 found but phase is Pending instead of Bound.
Oct  2 13:44:24.528: INFO: PersistentVolumeClaim pvc-vhvp9 found and phase=Bound (4.569246636s)
Oct  2 13:44:24.529: INFO: Waiting up to 3m0s for PersistentVolume local-bvpfp to have phase Bound
Oct  2 13:44:24.718: INFO: PersistentVolume local-bvpfp found and phase=Bound (189.214116ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gqf9
STEP: Creating a pod to test subpath
Oct  2 13:44:25.289: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gqf9" in namespace "provisioning-6350" to be "Succeeded or Failed"
Oct  2 13:44:25.479: INFO: Pod "pod-subpath-test-preprovisionedpv-gqf9": Phase="Pending", Reason="", readiness=false. Elapsed: 189.670016ms
Oct  2 13:44:27.668: INFO: Pod "pod-subpath-test-preprovisionedpv-gqf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379227991s
Oct  2 13:44:29.859: INFO: Pod "pod-subpath-test-preprovisionedpv-gqf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.569776575s
STEP: Saw pod success
Oct  2 13:44:29.859: INFO: Pod "pod-subpath-test-preprovisionedpv-gqf9" satisfied condition "Succeeded or Failed"
Oct  2 13:44:30.048: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-gqf9 container test-container-subpath-preprovisionedpv-gqf9: <nil>
STEP: delete the pod
Oct  2 13:44:30.442: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gqf9 to disappear
Oct  2 13:44:30.635: INFO: Pod pod-subpath-test-preprovisionedpv-gqf9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gqf9
Oct  2 13:44:30.635: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gqf9" in namespace "provisioning-6350"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:33.234: INFO: Only supported for providers [gce gke] (not aws)
... skipping 172 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:35.076: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 69 lines ...
• [SLOW TEST:64.147 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":10,"skipped":39,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:35.677: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 110 lines ...
Oct  2 13:43:43.692: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-5177cxwgh
STEP: creating a claim
Oct  2 13:43:43.892: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-ldnt
STEP: Creating a pod to test subpath
Oct  2 13:43:44.466: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ldnt" in namespace "provisioning-5177" to be "Succeeded or Failed"
Oct  2 13:43:44.656: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 189.118274ms
Oct  2 13:43:46.847: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380653571s
Oct  2 13:43:49.038: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571665783s
Oct  2 13:43:51.228: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.761913149s
Oct  2 13:43:53.421: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.954418604s
Oct  2 13:43:55.615: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.148075825s
Oct  2 13:43:57.805: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 13.338738467s
Oct  2 13:43:59.998: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 15.531415724s
Oct  2 13:44:02.187: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 17.720870285s
Oct  2 13:44:04.382: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.915625916s
STEP: Saw pod success
Oct  2 13:44:04.382: INFO: Pod "pod-subpath-test-dynamicpv-ldnt" satisfied condition "Succeeded or Failed"
Oct  2 13:44:04.571: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-ldnt container test-container-subpath-dynamicpv-ldnt: <nil>
STEP: delete the pod
Oct  2 13:44:04.960: INFO: Waiting for pod pod-subpath-test-dynamicpv-ldnt to disappear
Oct  2 13:44:05.150: INFO: Pod pod-subpath-test-dynamicpv-ldnt no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ldnt
Oct  2 13:44:05.150: INFO: Deleting pod "pod-subpath-test-dynamicpv-ldnt" in namespace "provisioning-5177"
STEP: Creating pod pod-subpath-test-dynamicpv-ldnt
STEP: Creating a pod to test subpath
Oct  2 13:44:05.531: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ldnt" in namespace "provisioning-5177" to be "Succeeded or Failed"
Oct  2 13:44:05.721: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 189.653063ms
Oct  2 13:44:07.912: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380969188s
Oct  2 13:44:10.103: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571651211s
Oct  2 13:44:12.293: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.761865218s
Oct  2 13:44:14.482: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.951511217s
Oct  2 13:44:16.673: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.142464488s
Oct  2 13:44:18.869: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 13.337807977s
Oct  2 13:44:21.059: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 15.527868316s
Oct  2 13:44:23.248: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 17.717442985s
Oct  2 13:44:25.439: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Pending", Reason="", readiness=false. Elapsed: 19.908120113s
Oct  2 13:44:27.629: INFO: Pod "pod-subpath-test-dynamicpv-ldnt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.098506749s
STEP: Saw pod success
Oct  2 13:44:27.630: INFO: Pod "pod-subpath-test-dynamicpv-ldnt" satisfied condition "Succeeded or Failed"
Oct  2 13:44:27.819: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-ldnt container test-container-subpath-dynamicpv-ldnt: <nil>
STEP: delete the pod
Oct  2 13:44:28.205: INFO: Waiting for pod pod-subpath-test-dynamicpv-ldnt to disappear
Oct  2 13:44:28.394: INFO: Pod pod-subpath-test-dynamicpv-ldnt no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ldnt
Oct  2 13:44:28.394: INFO: Deleting pod "pod-subpath-test-dynamicpv-ldnt" in namespace "provisioning-5177"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:40.506: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 76 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:44:13.821: INFO: >>> kubeConfig: /root/.kube/config
... skipping 114 lines ...
• [SLOW TEST:52.647 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:244
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":11,"skipped":71,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:45.611: INFO: Only supported for providers [gce gke] (not aws)
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-64gl
STEP: Creating a pod to test atomic-volume-subpath
Oct  2 13:44:18.802: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-64gl" in namespace "subpath-1793" to be "Succeeded or Failed"
Oct  2 13:44:18.990: INFO: Pod "pod-subpath-test-secret-64gl": Phase="Pending", Reason="", readiness=false. Elapsed: 187.974186ms
Oct  2 13:44:21.180: INFO: Pod "pod-subpath-test-secret-64gl": Phase="Running", Reason="", readiness=true. Elapsed: 2.377451552s
Oct  2 13:44:23.369: INFO: Pod "pod-subpath-test-secret-64gl": Phase="Running", Reason="", readiness=true. Elapsed: 4.567183846s
Oct  2 13:44:25.559: INFO: Pod "pod-subpath-test-secret-64gl": Phase="Running", Reason="", readiness=true. Elapsed: 6.756838463s
Oct  2 13:44:27.748: INFO: Pod "pod-subpath-test-secret-64gl": Phase="Running", Reason="", readiness=true. Elapsed: 8.945882677s
Oct  2 13:44:29.936: INFO: Pod "pod-subpath-test-secret-64gl": Phase="Running", Reason="", readiness=true. Elapsed: 11.134206708s
... skipping 2 lines ...
Oct  2 13:44:36.505: INFO: Pod "pod-subpath-test-secret-64gl": Phase="Running", Reason="", readiness=true. Elapsed: 17.703205111s
Oct  2 13:44:38.695: INFO: Pod "pod-subpath-test-secret-64gl": Phase="Running", Reason="", readiness=true. Elapsed: 19.892334348s
Oct  2 13:44:40.887: INFO: Pod "pod-subpath-test-secret-64gl": Phase="Running", Reason="", readiness=true. Elapsed: 22.085145414s
Oct  2 13:44:43.083: INFO: Pod "pod-subpath-test-secret-64gl": Phase="Running", Reason="", readiness=true. Elapsed: 24.280871441s
Oct  2 13:44:45.272: INFO: Pod "pod-subpath-test-secret-64gl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.47018464s
STEP: Saw pod success
Oct  2 13:44:45.273: INFO: Pod "pod-subpath-test-secret-64gl" satisfied condition "Succeeded or Failed"
Oct  2 13:44:45.461: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod pod-subpath-test-secret-64gl container test-container-subpath-secret-64gl: <nil>
STEP: delete the pod
Oct  2 13:44:45.848: INFO: Waiting for pod pod-subpath-test-secret-64gl to disappear
Oct  2 13:44:46.036: INFO: Pod pod-subpath-test-secret-64gl no longer exists
STEP: Deleting pod pod-subpath-test-secret-64gl
Oct  2 13:44:46.036: INFO: Deleting pod "pod-subpath-test-secret-64gl" in namespace "subpath-1793"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:46.617: INFO: Driver emptydir doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 26 lines ...
Oct  2 13:44:33.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct  2 13:44:34.399: INFO: Waiting up to 5m0s for pod "pod-48910a44-b9fe-478e-b00a-c11cab1d22f2" in namespace "emptydir-6246" to be "Succeeded or Failed"
Oct  2 13:44:34.590: INFO: Pod "pod-48910a44-b9fe-478e-b00a-c11cab1d22f2": Phase="Pending", Reason="", readiness=false. Elapsed: 191.375408ms
Oct  2 13:44:36.780: INFO: Pod "pod-48910a44-b9fe-478e-b00a-c11cab1d22f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381399218s
Oct  2 13:44:38.971: INFO: Pod "pod-48910a44-b9fe-478e-b00a-c11cab1d22f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571593418s
Oct  2 13:44:41.161: INFO: Pod "pod-48910a44-b9fe-478e-b00a-c11cab1d22f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.762316826s
Oct  2 13:44:43.353: INFO: Pod "pod-48910a44-b9fe-478e-b00a-c11cab1d22f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.95443603s
Oct  2 13:44:45.543: INFO: Pod "pod-48910a44-b9fe-478e-b00a-c11cab1d22f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.144351306s
STEP: Saw pod success
Oct  2 13:44:45.543: INFO: Pod "pod-48910a44-b9fe-478e-b00a-c11cab1d22f2" satisfied condition "Succeeded or Failed"
Oct  2 13:44:45.733: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-48910a44-b9fe-478e-b00a-c11cab1d22f2 container test-container: <nil>
STEP: delete the pod
Oct  2 13:44:46.116: INFO: Waiting for pod pod-48910a44-b9fe-478e-b00a-c11cab1d22f2 to disappear
Oct  2 13:44:46.305: INFO: Pod pod-48910a44-b9fe-478e-b00a-c11cab1d22f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":55,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity unused
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":6,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:47.004: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 126 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":115,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:48.708: INFO: Only supported for providers [azure] (not aws)
... skipping 132 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":36,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":5,"skipped":8,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:43:13.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 104 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436
    should not modify fsGroup if fsGroupPolicy=None
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":6,"skipped":8,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:49.230: INFO: Only supported for providers [gce gke] (not aws)
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:44:48.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5369" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":7,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:49.370: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 63 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
Oct  2 13:44:49.441: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Oct  2 13:44:49.441: INFO: Running '/tmp/kubectl3829199734/kubectl --server=https://api.e2e-de872154ff-19973.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8252 describe pod agnhost-primary-28jtx'
Oct  2 13:44:50.503: INFO: stderr: ""
Oct  2 13:44:50.504: INFO: stdout: "Name:         agnhost-primary-28jtx\nNamespace:    kubectl-8252\nPriority:     0\nNode:         ip-172-20-49-155.ap-southeast-2.compute.internal/172.20.49.155\nStart Time:   Sat, 02 Oct 2021 13:44:40 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  cni.projectcalico.org/podIP: 100.96.2.82/32\n              cni.projectcalico.org/podIPs: 100.96.2.82/32\nStatus:       Running\nIP:           100.96.2.82\nIPs:\n  IP:           100.96.2.82\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://a6272cc596257e7e0d393c9e3159fbb9db370086f6b5ce6ea8bd62f2793873f4\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 02 Oct 2021 13:44:42 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2r8c6 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-2r8c6:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  10s   default-scheduler  Successfully assigned kubectl-8252/agnhost-primary-28jtx to ip-172-20-49-155.ap-southeast-2.compute.internal\n  Normal  Pulled     8s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    8s    kubelet            Created container agnhost-primary\n  Normal  Started    8s    kubelet            Started container agnhost-primary\n"
Oct  2 13:44:50.504: INFO: Running '/tmp/kubectl3829199734/kubectl --server=https://api.e2e-de872154ff-19973.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8252 describe rc agnhost-primary'
Oct  2 13:44:51.788: INFO: stderr: ""
Oct  2 13:44:51.788: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-8252\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  11s   replication-controller  Created pod: agnhost-primary-28jtx\n"
Oct  2 13:44:51.788: INFO: Running '/tmp/kubectl3829199734/kubectl --server=https://api.e2e-de872154ff-19973.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8252 describe service agnhost-primary'
Oct  2 13:44:53.023: INFO: stderr: ""
Oct  2 13:44:53.023: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-8252\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.71.107.34\nIPs:               100.71.107.34\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.2.82:6379\nSession Affinity:  None\nEvents:            <none>\n"
Oct  2 13:44:53.219: INFO: Running '/tmp/kubectl3829199734/kubectl --server=https://api.e2e-de872154ff-19973.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8252 describe node ip-172-20-33-188.ap-southeast-2.compute.internal'
Oct  2 13:44:55.262: INFO: stderr: ""
Oct  2 13:44:55.262: INFO: stdout: "Name:               ip-172-20-33-188.ap-southeast-2.compute.internal\nRoles:              node\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=t3.large\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=ap-southeast-2\n                    failure-domain.beta.kubernetes.io/zone=ap-southeast-2a\n                    kops.k8s.io/instancegroup=nodes-ap-southeast-2a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-33-188.ap-southeast-2.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=node\n                    node-role.kubernetes.io/node=\n                    node.kubernetes.io/instance-type=t3.large\n                    topology.hostpath.csi/node=ip-172-20-33-188.ap-southeast-2.compute.internal\n                    topology.kubernetes.io/region=ap-southeast-2\n                    topology.kubernetes.io/zone=ap-southeast-2a\nAnnotations:        flannel.alpha.coreos.com/backend-data: {\"VtepMAC\":\"8e:8b:37:4a:92:c7\"}\n                    flannel.alpha.coreos.com/backend-type: vxlan\n                    flannel.alpha.coreos.com/kube-subnet-manager: true\n                    flannel.alpha.coreos.com/public-ip: 172.20.33.188\n                    node.alpha.kubernetes.io/ttl: 0\n                    projectcalico.org/IPv4IPIPTunnelAddr: 100.96.1.1\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 02 Oct 2021 13:36:28 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-33-188.ap-southeast-2.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Sat, 02 Oct 2021 13:44:49 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sat, 02 Oct 2021 13:44:29 +0000   Sat, 02 Oct 2021 13:36:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sat, 02 Oct 2021 13:44:29 +0000   Sat, 02 Oct 2021 13:36:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sat, 02 Oct 2021 13:44:29 +0000   Sat, 02 Oct 2021 13:36:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sat, 02 Oct 2021 13:44:29 +0000   Sat, 02 Oct 2021 13:36:48 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:   172.20.33.188\n  ExternalIP:   3.106.56.47\n  Hostname:     ip-172-20-33-188.ap-southeast-2.compute.internal\n  InternalDNS:  ip-172-20-33-188.ap-southeast-2.compute.internal\n  ExternalDNS:  ec2-3-106-56-47.ap-southeast-2.compute.amazonaws.com\nCapacity:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           48725632Ki\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      8044196Ki\n  pods:                        110\nAllocatable:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           44905542377\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      7941796Ki\n  pods:                        110\nSystem Info:\n  Machine ID:                 ec281e6b130f295b38e24eb82a3f6524\n  System UUID:                ec281e6b-130f-295b-38e2-4eb82a3f6524\n  Boot ID:                    9164075c-d87e-443a-9f84-4fc5d274db5d\n  Kernel Version:             5.11.0-1019-aws\n  OS Image:                   Ubuntu 20.04.3 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.10\n  Kubelet Version:            v1.21.5\n  Kube-Proxy Version:         v1.21.5\nPodCIDR:                      100.96.1.0/24\nPodCIDRs:                     100.96.1.0/24\nProviderID:                   aws:///ap-southeast-2a/i-05ed96ed47c12c5c8\nNon-terminated Pods:          (13 in total)\n  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---\n  deployment-6872             test-deployment-7b4c744884-tghnx                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s\n  dns-6278                    dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s\n  kube-system                 canal-5926s                                                        100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m27s\n  kube-system                 coredns-5dc785954d-kt26c                                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m46s\n  kube-system                 coredns-autoscaler-84d4cfd89c-bbcmp                                20m (1%)      0 (0%)      10Mi (0%)        0 (0%)         9m46s\n  kube-system                 kube-proxy-ip-172-20-33-188.ap-southeast-2.compute.internal        100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s\n  nettest-1498                netserver-0                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s\n  nettest-1498                test-container-pod                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s\n  provisioning-2328           hostexec-ip-172-20-33-188.ap-southeast-2.compute.internal-ck78r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s\n  provisioning-6593           hostexec-ip-172-20-33-188.ap-southeast-2.compute.internal-76554    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s\n  pv-8406                     pod-ephm-test-projected-4qmr               
                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s\n  services-8399               execpodkzrx4                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\n  statefulset-1540            ss2-2                                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                    Requests    Limits\n  --------                    --------    ------\n  cpu                         320m (16%)  0 (0%)\n  memory                      80Mi (1%)   170Mi (2%)\n  ephemeral-storage           0 (0%)      0 (0%)\n  hugepages-1Gi               0 (0%)      0 (0%)\n  hugepages-2Mi               0 (0%)      0 (0%)\n  attachable-volumes-aws-ebs  0           0\nEvents:\n  Type     Reason                   Age    From        Message\n  ----     ------                   ----   ----        -------\n  Normal   Starting                 8m27s  kubelet     Starting kubelet.\n  Warning  InvalidDiskCapacity      8m27s  kubelet     invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory  8m27s  kubelet     Node ip-172-20-33-188.ap-southeast-2.compute.internal status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure    8m27s  kubelet     Node ip-172-20-33-188.ap-southeast-2.compute.internal status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     8m27s  kubelet     Node ip-172-20-33-188.ap-southeast-2.compute.internal status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced  8m27s  kubelet     Updated Node Allocatable limit across pods\n  Normal   Starting                 8m22s  kube-proxy  Starting kube-proxy.\n  Normal   NodeReady                8m7s   kubelet     Node ip-172-20-33-188.ap-southeast-2.compute.internal status is now: NodeReady\n"
... skipping 38 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":3,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:57.113: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 122 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:496
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes","total":-1,"completed":3,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:58.421: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 59 lines ...
• [SLOW TEST:23.814 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":11,"skipped":63,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:58.949: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 41 lines ...
• [SLOW TEST:13.672 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":12,"skipped":77,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:44:59.335: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 37 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":7,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:44:23.283: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Oct  2 13:44:37.582: INFO: PersistentVolumeClaim pvc-6ls7f found but phase is Pending instead of Bound.
Oct  2 13:44:39.772: INFO: PersistentVolumeClaim pvc-6ls7f found and phase=Bound (11.142096635s)
Oct  2 13:44:39.772: INFO: Waiting up to 3m0s for PersistentVolume local-vxsdn to have phase Bound
Oct  2 13:44:39.962: INFO: PersistentVolume local-vxsdn found and phase=Bound (189.811572ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-n66z
STEP: Creating a pod to test subpath
Oct  2 13:44:40.532: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-n66z" in namespace "provisioning-5050" to be "Succeeded or Failed"
Oct  2 13:44:40.722: INFO: Pod "pod-subpath-test-preprovisionedpv-n66z": Phase="Pending", Reason="", readiness=false. Elapsed: 189.805454ms
Oct  2 13:44:42.912: INFO: Pod "pod-subpath-test-preprovisionedpv-n66z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380302787s
Oct  2 13:44:45.103: INFO: Pod "pod-subpath-test-preprovisionedpv-n66z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570954004s
Oct  2 13:44:47.294: INFO: Pod "pod-subpath-test-preprovisionedpv-n66z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.761987657s
Oct  2 13:44:49.486: INFO: Pod "pod-subpath-test-preprovisionedpv-n66z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.954081004s
Oct  2 13:44:51.677: INFO: Pod "pod-subpath-test-preprovisionedpv-n66z": Phase="Pending", Reason="", readiness=false. Elapsed: 11.14477614s
Oct  2 13:44:53.878: INFO: Pod "pod-subpath-test-preprovisionedpv-n66z": Phase="Pending", Reason="", readiness=false. Elapsed: 13.345624349s
Oct  2 13:44:56.071: INFO: Pod "pod-subpath-test-preprovisionedpv-n66z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.539069224s
STEP: Saw pod success
Oct  2 13:44:56.071: INFO: Pod "pod-subpath-test-preprovisionedpv-n66z" satisfied condition "Succeeded or Failed"
Oct  2 13:44:56.261: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-n66z container test-container-subpath-preprovisionedpv-n66z: <nil>
STEP: delete the pod
Oct  2 13:44:56.652: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-n66z to disappear
Oct  2 13:44:56.841: INFO: Pod pod-subpath-test-preprovisionedpv-n66z no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-n66z
Oct  2 13:44:56.842: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-n66z" in namespace "provisioning-5050"
... skipping 32 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-d93eae9b-1f41-4cfc-ad9c-146e1ae657f6
STEP: Creating a pod to test consume secrets
Oct  2 13:44:50.622: INFO: Waiting up to 5m0s for pod "pod-secrets-53b2397a-cf96-46e8-b3e0-d8ce8ebdc4eb" in namespace "secrets-7803" to be "Succeeded or Failed"
Oct  2 13:44:50.815: INFO: Pod "pod-secrets-53b2397a-cf96-46e8-b3e0-d8ce8ebdc4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 193.474841ms
Oct  2 13:44:53.005: INFO: Pod "pod-secrets-53b2397a-cf96-46e8-b3e0-d8ce8ebdc4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383042349s
Oct  2 13:44:55.194: INFO: Pod "pod-secrets-53b2397a-cf96-46e8-b3e0-d8ce8ebdc4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572069122s
Oct  2 13:44:57.385: INFO: Pod "pod-secrets-53b2397a-cf96-46e8-b3e0-d8ce8ebdc4eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.762643356s
Oct  2 13:44:59.574: INFO: Pod "pod-secrets-53b2397a-cf96-46e8-b3e0-d8ce8ebdc4eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.951625874s
STEP: Saw pod success
Oct  2 13:44:59.574: INFO: Pod "pod-secrets-53b2397a-cf96-46e8-b3e0-d8ce8ebdc4eb" satisfied condition "Succeeded or Failed"
Oct  2 13:44:59.762: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod pod-secrets-53b2397a-cf96-46e8-b3e0-d8ce8ebdc4eb container secret-volume-test: <nil>
STEP: delete the pod
Oct  2 13:45:00.220: INFO: Waiting for pod pod-secrets-53b2397a-cf96-46e8-b3e0-d8ce8ebdc4eb to disappear
Oct  2 13:45:00.415: INFO: Pod pod-secrets-53b2397a-cf96-46e8-b3e0-d8ce8ebdc4eb no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.537 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 82 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291
    should create and stop a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":9,"skipped":109,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:03.605: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":7,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:44:02.618: INFO: >>> kubeConfig: /root/.kube/config
... skipping 53 lines ...
Oct  2 13:44:16.298: INFO: PersistentVolumeClaim csi-hostpath86rzl found but phase is Pending instead of Bound.
Oct  2 13:44:18.489: INFO: PersistentVolumeClaim csi-hostpath86rzl found but phase is Pending instead of Bound.
Oct  2 13:44:20.681: INFO: PersistentVolumeClaim csi-hostpath86rzl found but phase is Pending instead of Bound.
Oct  2 13:44:22.873: INFO: PersistentVolumeClaim csi-hostpath86rzl found and phase=Bound (11.152610013s)
STEP: Creating pod pod-subpath-test-dynamicpv-b74j
STEP: Creating a pod to test subpath
Oct  2 13:44:23.449: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-b74j" in namespace "provisioning-9803" to be "Succeeded or Failed"
Oct  2 13:44:23.643: INFO: Pod "pod-subpath-test-dynamicpv-b74j": Phase="Pending", Reason="", readiness=false. Elapsed: 193.962675ms
Oct  2 13:44:25.835: INFO: Pod "pod-subpath-test-dynamicpv-b74j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386197398s
Oct  2 13:44:28.028: INFO: Pod "pod-subpath-test-dynamicpv-b74j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.578375443s
Oct  2 13:44:30.220: INFO: Pod "pod-subpath-test-dynamicpv-b74j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.770461337s
Oct  2 13:44:32.412: INFO: Pod "pod-subpath-test-dynamicpv-b74j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.963096664s
Oct  2 13:44:34.605: INFO: Pod "pod-subpath-test-dynamicpv-b74j": Phase="Pending", Reason="", readiness=false. Elapsed: 11.155857044s
Oct  2 13:44:36.797: INFO: Pod "pod-subpath-test-dynamicpv-b74j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.347995731s
STEP: Saw pod success
Oct  2 13:44:36.797: INFO: Pod "pod-subpath-test-dynamicpv-b74j" satisfied condition "Succeeded or Failed"
Oct  2 13:44:36.991: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-b74j container test-container-subpath-dynamicpv-b74j: <nil>
STEP: delete the pod
Oct  2 13:44:37.387: INFO: Waiting for pod pod-subpath-test-dynamicpv-b74j to disappear
Oct  2 13:44:37.582: INFO: Pod pod-subpath-test-dynamicpv-b74j no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-b74j
Oct  2 13:44:37.582: INFO: Deleting pod "pod-subpath-test-dynamicpv-b74j" in namespace "provisioning-9803"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":78,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":10,"skipped":63,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:44:56.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
• [SLOW TEST:7.617 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":63,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:05.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4024" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":10,"skipped":111,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:05.668: INFO: Only supported for providers [azure] (not aws)
... skipping 141 lines ...
• [SLOW TEST:17.902 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":12,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:05.987: INFO: Driver local doesn't support ext4 -- skipping
... skipping 60 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":8,"skipped":51,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:45:00.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 92 lines ...
Oct  2 13:44:56.948: INFO: Pod aws-client still exists
Oct  2 13:44:58.757: INFO: Waiting for pod aws-client to disappear
Oct  2 13:44:58.948: INFO: Pod aws-client still exists
Oct  2 13:45:00.758: INFO: Waiting for pod aws-client to disappear
Oct  2 13:45:00.992: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Oct  2 13:45:01.904: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-08dc33c300ac37e8f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08dc33c300ac37e8f is currently attached to i-0c0b5d5174a6f922e
	status code: 400, request id: 18b7e2db-61c4-4732-bc83-46f997919b24
Oct  2 13:45:07.834: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-08dc33c300ac37e8f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08dc33c300ac37e8f is currently attached to i-0c0b5d5174a6f922e
	status code: 400, request id: 3ad92805-af94-4e03-9f21-b065b19051e8
Oct  2 13:45:13.812: INFO: Successfully deleted PD "aws://ap-southeast-2a/vol-08dc33c300ac37e8f".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:13.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6270" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":5,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
• [SLOW TEST:15.463 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":12,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:14.457: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 116 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:15.298: INFO: Driver csi-hostpath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver csi-hostpath doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":9,"skipped":51,"failed":0}
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:45:10.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
Oct  2 13:45:07.224: INFO: PersistentVolumeClaim pvc-4zjvb found but phase is Pending instead of Bound.
Oct  2 13:45:09.417: INFO: PersistentVolumeClaim pvc-4zjvb found and phase=Bound (13.349431589s)
Oct  2 13:45:09.417: INFO: Waiting up to 3m0s for PersistentVolume local-wbq62 to have phase Bound
Oct  2 13:45:09.609: INFO: PersistentVolume local-wbq62 found and phase=Bound (192.565987ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xxxx
STEP: Creating a pod to test subpath
Oct  2 13:45:10.177: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xxxx" in namespace "provisioning-6593" to be "Succeeded or Failed"
Oct  2 13:45:10.367: INFO: Pod "pod-subpath-test-preprovisionedpv-xxxx": Phase="Pending", Reason="", readiness=false. Elapsed: 189.149464ms
Oct  2 13:45:12.564: INFO: Pod "pod-subpath-test-preprovisionedpv-xxxx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38656542s
Oct  2 13:45:14.754: INFO: Pod "pod-subpath-test-preprovisionedpv-xxxx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.576397167s
STEP: Saw pod success
Oct  2 13:45:14.754: INFO: Pod "pod-subpath-test-preprovisionedpv-xxxx" satisfied condition "Succeeded or Failed"
Oct  2 13:45:14.943: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-xxxx container test-container-subpath-preprovisionedpv-xxxx: <nil>
STEP: delete the pod
Oct  2 13:45:15.328: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xxxx to disappear
Oct  2 13:45:15.517: INFO: Pod pod-subpath-test-preprovisionedpv-xxxx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xxxx
Oct  2 13:45:15.517: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xxxx" in namespace "provisioning-6593"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":20,"skipped":133,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:18.269: INFO: Only supported for providers [openstack] (not aws)
... skipping 166 lines ...
Oct  2 13:44:10.162: INFO: PersistentVolumeClaim csi-hostpath62mvd found but phase is Pending instead of Bound.
Oct  2 13:44:12.353: INFO: PersistentVolumeClaim csi-hostpath62mvd found but phase is Pending instead of Bound.
Oct  2 13:44:14.545: INFO: PersistentVolumeClaim csi-hostpath62mvd found but phase is Pending instead of Bound.
Oct  2 13:44:16.737: INFO: PersistentVolumeClaim csi-hostpath62mvd found and phase=Bound (8.957803575s)
STEP: Creating pod pod-subpath-test-dynamicpv-wd6f
STEP: Creating a pod to test subpath
Oct  2 13:44:17.311: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-wd6f" in namespace "provisioning-8816" to be "Succeeded or Failed"
Oct  2 13:44:17.507: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 195.683405ms
Oct  2 13:44:19.701: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389126232s
Oct  2 13:44:21.900: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.58856556s
Oct  2 13:44:24.091: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.779869567s
Oct  2 13:44:26.283: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.971594105s
Oct  2 13:44:28.475: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.163693439s
Oct  2 13:44:30.666: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.354864528s
Oct  2 13:44:32.860: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.548583863s
Oct  2 13:44:35.052: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.740046107s
Oct  2 13:44:37.243: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.931453539s
STEP: Saw pod success
Oct  2 13:44:37.243: INFO: Pod "pod-subpath-test-dynamicpv-wd6f" satisfied condition "Succeeded or Failed"
Oct  2 13:44:37.434: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-wd6f container test-container-subpath-dynamicpv-wd6f: <nil>
STEP: delete the pod
Oct  2 13:44:37.826: INFO: Waiting for pod pod-subpath-test-dynamicpv-wd6f to disappear
Oct  2 13:44:38.016: INFO: Pod pod-subpath-test-dynamicpv-wd6f no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-wd6f
Oct  2 13:44:38.016: INFO: Deleting pod "pod-subpath-test-dynamicpv-wd6f" in namespace "provisioning-8816"
STEP: Creating pod pod-subpath-test-dynamicpv-wd6f
STEP: Creating a pod to test subpath
Oct  2 13:44:38.400: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-wd6f" in namespace "provisioning-8816" to be "Succeeded or Failed"
Oct  2 13:44:38.591: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 190.306412ms
Oct  2 13:44:40.784: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383971536s
Oct  2 13:44:42.975: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.574843099s
Oct  2 13:44:45.167: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.76603767s
Oct  2 13:44:47.358: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.957364548s
Oct  2 13:44:49.549: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.148724993s
Oct  2 13:44:51.742: INFO: Pod "pod-subpath-test-dynamicpv-wd6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.34100096s
STEP: Saw pod success
Oct  2 13:44:51.742: INFO: Pod "pod-subpath-test-dynamicpv-wd6f" satisfied condition "Succeeded or Failed"
Oct  2 13:44:51.932: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod pod-subpath-test-dynamicpv-wd6f container test-container-subpath-dynamicpv-wd6f: <nil>
STEP: delete the pod
Oct  2 13:44:52.320: INFO: Waiting for pod pod-subpath-test-dynamicpv-wd6f to disappear
Oct  2 13:44:52.511: INFO: Pod pod-subpath-test-dynamicpv-wd6f no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-wd6f
Oct  2 13:44:52.511: INFO: Deleting pod "pod-subpath-test-dynamicpv-wd6f" in namespace "provisioning-8816"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":9,"skipped":139,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:19.345: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 41 lines ...
Oct  2 13:44:51.848: INFO: PersistentVolumeClaim pvc-bkjjp found but phase is Pending instead of Bound.
Oct  2 13:44:54.040: INFO: PersistentVolumeClaim pvc-bkjjp found and phase=Bound (8.953480569s)
Oct  2 13:44:54.040: INFO: Waiting up to 3m0s for PersistentVolume local-7rfmx to have phase Bound
Oct  2 13:44:54.229: INFO: PersistentVolume local-7rfmx found and phase=Bound (189.836389ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zz2n
STEP: Creating a pod to test atomic-volume-subpath
Oct  2 13:44:54.806: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zz2n" in namespace "provisioning-2328" to be "Succeeded or Failed"
Oct  2 13:44:54.996: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n": Phase="Pending", Reason="", readiness=false. Elapsed: 189.873987ms
Oct  2 13:44:57.187: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381138781s
Oct  2 13:44:59.379: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.573112085s
Oct  2 13:45:01.571: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n": Phase="Running", Reason="", readiness=true. Elapsed: 6.764428734s
Oct  2 13:45:03.761: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n": Phase="Running", Reason="", readiness=true. Elapsed: 8.954854484s
Oct  2 13:45:05.962: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n": Phase="Running", Reason="", readiness=true. Elapsed: 11.155816155s
Oct  2 13:45:08.154: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n": Phase="Running", Reason="", readiness=true. Elapsed: 13.347477567s
Oct  2 13:45:10.349: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n": Phase="Running", Reason="", readiness=true. Elapsed: 15.542274691s
Oct  2 13:45:12.540: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n": Phase="Running", Reason="", readiness=true. Elapsed: 17.733396542s
Oct  2 13:45:14.731: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n": Phase="Running", Reason="", readiness=true. Elapsed: 19.925208361s
Oct  2 13:45:16.922: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n": Phase="Running", Reason="", readiness=true. Elapsed: 22.115987821s
Oct  2 13:45:19.113: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.307075232s
STEP: Saw pod success
Oct  2 13:45:19.113: INFO: Pod "pod-subpath-test-preprovisionedpv-zz2n" satisfied condition "Succeeded or Failed"
Oct  2 13:45:19.307: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-zz2n container test-container-subpath-preprovisionedpv-zz2n: <nil>
STEP: delete the pod
Oct  2 13:45:19.751: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zz2n to disappear
Oct  2 13:45:19.941: INFO: Pod pod-subpath-test-preprovisionedpv-zz2n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zz2n
Oct  2 13:45:19.942: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zz2n" in namespace "provisioning-2328"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":11,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:22.662: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":10,"skipped":51,"failed":0}
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:45:18.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Oct  2 13:45:19.171: INFO: Waiting up to 5m0s for pod "var-expansion-bf71a394-75e2-40c8-b6ad-5124e9b9d750" in namespace "var-expansion-7421" to be "Succeeded or Failed"
Oct  2 13:45:19.362: INFO: Pod "var-expansion-bf71a394-75e2-40c8-b6ad-5124e9b9d750": Phase="Pending", Reason="", readiness=false. Elapsed: 190.146247ms
Oct  2 13:45:21.552: INFO: Pod "var-expansion-bf71a394-75e2-40c8-b6ad-5124e9b9d750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.380095833s
STEP: Saw pod success
Oct  2 13:45:21.552: INFO: Pod "var-expansion-bf71a394-75e2-40c8-b6ad-5124e9b9d750" satisfied condition "Succeeded or Failed"
Oct  2 13:45:21.741: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod var-expansion-bf71a394-75e2-40c8-b6ad-5124e9b9d750 container dapi-container: <nil>
STEP: delete the pod
Oct  2 13:45:22.149: INFO: Waiting for pod var-expansion-bf71a394-75e2-40c8-b6ad-5124e9b9d750 to disappear
Oct  2 13:45:22.348: INFO: Pod var-expansion-bf71a394-75e2-40c8-b6ad-5124e9b9d750 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 27 lines ...
Oct  2 13:45:08.746: INFO: PersistentVolumeClaim pvc-6n5l6 found but phase is Pending instead of Bound.
Oct  2 13:45:10.936: INFO: PersistentVolumeClaim pvc-6n5l6 found and phase=Bound (4.570535151s)
Oct  2 13:45:10.936: INFO: Waiting up to 3m0s for PersistentVolume local-9gfbr to have phase Bound
Oct  2 13:45:11.126: INFO: PersistentVolume local-9gfbr found and phase=Bound (189.753213ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-lqnt
STEP: Creating a pod to test exec-volume-test
Oct  2 13:45:11.698: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-lqnt" in namespace "volume-904" to be "Succeeded or Failed"
Oct  2 13:45:11.888: INFO: Pod "exec-volume-test-preprovisionedpv-lqnt": Phase="Pending", Reason="", readiness=false. Elapsed: 189.577975ms
Oct  2 13:45:14.077: INFO: Pod "exec-volume-test-preprovisionedpv-lqnt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379209997s
Oct  2 13:45:16.268: INFO: Pod "exec-volume-test-preprovisionedpv-lqnt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.569893223s
STEP: Saw pod success
Oct  2 13:45:16.268: INFO: Pod "exec-volume-test-preprovisionedpv-lqnt" satisfied condition "Succeeded or Failed"
Oct  2 13:45:16.458: INFO: Trying to get logs from node ip-172-20-46-238.ap-southeast-2.compute.internal pod exec-volume-test-preprovisionedpv-lqnt container exec-container-preprovisionedpv-lqnt: <nil>
STEP: delete the pod
Oct  2 13:45:16.858: INFO: Waiting for pod exec-volume-test-preprovisionedpv-lqnt to disappear
Oct  2 13:45:17.053: INFO: Pod exec-volume-test-preprovisionedpv-lqnt no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-lqnt
Oct  2 13:45:17.054: INFO: Deleting pod "exec-volume-test-preprovisionedpv-lqnt" in namespace "volume-904"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":51,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":13,"skipped":86,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:22.783: INFO: Only supported for providers [gce gke] (not aws)
... skipping 386 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 21 lines ...
Oct  2 13:44:53.526: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:44:53.717: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:44:54.299: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:44:54.491: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:44:54.682: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:44:54.873: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:44:55.256: INFO: Lookups using dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6278.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6278.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local jessie_udp@dns-test-service-2.dns-6278.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6278.svc.cluster.local]

Oct  2 13:45:00.448: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:00.653: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:00.854: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:01.125: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:01.714: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:01.908: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:02.110: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:02.301: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:02.683: INFO: Lookups using dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6278.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6278.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local jessie_udp@dns-test-service-2.dns-6278.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6278.svc.cluster.local]

Oct  2 13:45:05.464: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:05.664: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:05.865: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:06.056: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:06.637: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:06.827: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:07.018: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:07.209: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:07.592: INFO: Lookups using dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6278.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6278.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local jessie_udp@dns-test-service-2.dns-6278.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6278.svc.cluster.local]

Oct  2 13:45:10.449: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:10.667: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:10.867: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:11.074: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:11.652: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:11.843: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:12.034: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:12.231: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:12.617: INFO: Lookups using dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6278.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6278.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local jessie_udp@dns-test-service-2.dns-6278.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6278.svc.cluster.local]

Oct  2 13:45:15.448: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:15.638: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:15.829: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:16.020: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:16.592: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:16.783: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:16.974: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:17.166: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6278.svc.cluster.local from pod dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516: the server could not find the requested resource (get pods dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516)
Oct  2 13:45:17.548: INFO: Lookups using dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6278.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6278.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6278.svc.cluster.local jessie_udp@dns-test-service-2.dns-6278.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6278.svc.cluster.local]

Oct  2 13:45:22.588: INFO: DNS probes using dns-6278/dns-test-a67d1aa4-3d37-4c71-82e6-252ddbcc7516 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:36.371 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":7,"skipped":52,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Oct  2 13:45:07.094: INFO: PersistentVolumeClaim pvc-zsb8z found but phase is Pending instead of Bound.
Oct  2 13:45:09.283: INFO: PersistentVolumeClaim pvc-zsb8z found and phase=Bound (6.758666745s)
Oct  2 13:45:09.283: INFO: Waiting up to 3m0s for PersistentVolume local-7j5cj to have phase Bound
Oct  2 13:45:09.472: INFO: PersistentVolume local-7j5cj found and phase=Bound (189.082458ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dxbz
STEP: Creating a pod to test subpath
Oct  2 13:45:10.050: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dxbz" in namespace "provisioning-8369" to be "Succeeded or Failed"
Oct  2 13:45:10.240: INFO: Pod "pod-subpath-test-preprovisionedpv-dxbz": Phase="Pending", Reason="", readiness=false. Elapsed: 190.056868ms
Oct  2 13:45:12.431: INFO: Pod "pod-subpath-test-preprovisionedpv-dxbz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380627566s
Oct  2 13:45:14.621: INFO: Pod "pod-subpath-test-preprovisionedpv-dxbz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570615076s
Oct  2 13:45:16.811: INFO: Pod "pod-subpath-test-preprovisionedpv-dxbz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.760758822s
STEP: Saw pod success
Oct  2 13:45:16.811: INFO: Pod "pod-subpath-test-preprovisionedpv-dxbz" satisfied condition "Succeeded or Failed"
Oct  2 13:45:17.000: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-dxbz container test-container-subpath-preprovisionedpv-dxbz: <nil>
STEP: delete the pod
Oct  2 13:45:17.395: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dxbz to disappear
Oct  2 13:45:17.586: INFO: Pod pod-subpath-test-preprovisionedpv-dxbz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dxbz
Oct  2 13:45:17.586: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dxbz" in namespace "provisioning-8369"
STEP: Creating pod pod-subpath-test-preprovisionedpv-dxbz
STEP: Creating a pod to test subpath
Oct  2 13:45:17.969: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dxbz" in namespace "provisioning-8369" to be "Succeeded or Failed"
Oct  2 13:45:18.158: INFO: Pod "pod-subpath-test-preprovisionedpv-dxbz": Phase="Pending", Reason="", readiness=false. Elapsed: 189.430188ms
Oct  2 13:45:20.352: INFO: Pod "pod-subpath-test-preprovisionedpv-dxbz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.383073842s
STEP: Saw pod success
Oct  2 13:45:20.352: INFO: Pod "pod-subpath-test-preprovisionedpv-dxbz" satisfied condition "Succeeded or Failed"
Oct  2 13:45:20.545: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-dxbz container test-container-subpath-preprovisionedpv-dxbz: <nil>
STEP: delete the pod
Oct  2 13:45:20.941: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dxbz to disappear
Oct  2 13:45:21.131: INFO: Pod pod-subpath-test-preprovisionedpv-dxbz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dxbz
Oct  2 13:45:21.131: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dxbz" in namespace "provisioning-8369"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:23.808: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 170 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:25.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-6468" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":12,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:25.763: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 72 lines ...
• [SLOW TEST:11.404 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":13,"skipped":78,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:25.933: INFO: Driver local doesn't support ext3 -- skipping
... skipping 72 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct  2 13:45:19.324: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  2 13:45:19.521: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9fmn
STEP: Creating a pod to test subpath
Oct  2 13:45:19.743: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9fmn" in namespace "provisioning-6200" to be "Succeeded or Failed"
Oct  2 13:45:19.935: INFO: Pod "pod-subpath-test-inlinevolume-9fmn": Phase="Pending", Reason="", readiness=false. Elapsed: 192.759619ms
Oct  2 13:45:22.127: INFO: Pod "pod-subpath-test-inlinevolume-9fmn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384208952s
Oct  2 13:45:24.371: INFO: Pod "pod-subpath-test-inlinevolume-9fmn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.6285614s
STEP: Saw pod success
Oct  2 13:45:24.371: INFO: Pod "pod-subpath-test-inlinevolume-9fmn" satisfied condition "Succeeded or Failed"
Oct  2 13:45:24.578: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod pod-subpath-test-inlinevolume-9fmn container test-container-volume-inlinevolume-9fmn: <nil>
STEP: delete the pod
Oct  2 13:45:25.015: INFO: Waiting for pod pod-subpath-test-inlinevolume-9fmn to disappear
Oct  2 13:45:25.204: INFO: Pod pod-subpath-test-inlinevolume-9fmn no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9fmn
Oct  2 13:45:25.204: INFO: Deleting pod "pod-subpath-test-inlinevolume-9fmn" in namespace "provisioning-6200"
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":21,"skipped":150,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:25.985: INFO: Only supported for providers [vsphere] (not aws)
... skipping 120 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller","total":-1,"completed":12,"skipped":105,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:26.030: INFO: Only supported for providers [gce gke] (not aws)
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:27.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4957" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":14,"skipped":102,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 423 lines ...
• [SLOW TEST:13.150 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":6,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:28.500: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 72 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct  2 13:45:20.354: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  2 13:45:20.354: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-lx55
STEP: Creating a pod to test subpath
Oct  2 13:45:20.552: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-lx55" in namespace "provisioning-2702" to be "Succeeded or Failed"
Oct  2 13:45:20.749: INFO: Pod "pod-subpath-test-inlinevolume-lx55": Phase="Pending", Reason="", readiness=false. Elapsed: 196.973453ms
Oct  2 13:45:22.941: INFO: Pod "pod-subpath-test-inlinevolume-lx55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.388358041s
Oct  2 13:45:25.133: INFO: Pod "pod-subpath-test-inlinevolume-lx55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.580833591s
Oct  2 13:45:27.325: INFO: Pod "pod-subpath-test-inlinevolume-lx55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.772818882s
STEP: Saw pod success
Oct  2 13:45:27.325: INFO: Pod "pod-subpath-test-inlinevolume-lx55" satisfied condition "Succeeded or Failed"
Oct  2 13:45:27.516: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-subpath-test-inlinevolume-lx55 container test-container-volume-inlinevolume-lx55: <nil>
STEP: delete the pod
Oct  2 13:45:27.921: INFO: Waiting for pod pod-subpath-test-inlinevolume-lx55 to disappear
Oct  2 13:45:28.111: INFO: Pod pod-subpath-test-inlinevolume-lx55 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-lx55
Oct  2 13:45:28.111: INFO: Deleting pod "pod-subpath-test-inlinevolume-lx55" in namespace "provisioning-2702"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":10,"skipped":143,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:28.912: INFO: Only supported for providers [gce gke] (not aws)
... skipping 55 lines ...
Oct  2 13:45:26.011: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct  2 13:45:26.011: INFO: stdout: "controller-manager scheduler etcd-1 etcd-0"
STEP: getting details of componentstatuses
STEP: getting status of controller-manager
Oct  2 13:45:26.011: INFO: Running '/tmp/kubectl3829199734/kubectl --server=https://api.e2e-de872154ff-19973.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-625 get componentstatuses controller-manager'
Oct  2 13:45:26.703: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct  2 13:45:26.703: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of scheduler
Oct  2 13:45:26.703: INFO: Running '/tmp/kubectl3829199734/kubectl --server=https://api.e2e-de872154ff-19973.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-625 get componentstatuses scheduler'
Oct  2 13:45:27.383: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct  2 13:45:27.383: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of etcd-1
Oct  2 13:45:27.383: INFO: Running '/tmp/kubectl3829199734/kubectl --server=https://api.e2e-de872154ff-19973.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-625 get componentstatuses etcd-1'
Oct  2 13:45:28.067: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct  2 13:45:28.067: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of etcd-0
Oct  2 13:45:28.067: INFO: Running '/tmp/kubectl3829199734/kubectl --server=https://api.e2e-de872154ff-19973.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-625 get componentstatuses etcd-0'
Oct  2 13:45:28.740: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct  2 13:45:28.740: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:28.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-625" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":14,"skipped":123,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:29.146: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Oct  2 13:45:24.413: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  2 13:45:24.647: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-7856
STEP: Creating a pod to test subpath
Oct  2 13:45:24.840: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-7856" in namespace "provisioning-450" to be "Succeeded or Failed"
Oct  2 13:45:25.031: INFO: Pod "pod-subpath-test-inlinevolume-7856": Phase="Pending", Reason="", readiness=false. Elapsed: 191.710824ms
Oct  2 13:45:27.222: INFO: Pod "pod-subpath-test-inlinevolume-7856": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382647348s
Oct  2 13:45:29.419: INFO: Pod "pod-subpath-test-inlinevolume-7856": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.57912965s
STEP: Saw pod success
Oct  2 13:45:29.419: INFO: Pod "pod-subpath-test-inlinevolume-7856" satisfied condition "Succeeded or Failed"
Oct  2 13:45:29.609: INFO: Trying to get logs from node ip-172-20-42-183.ap-southeast-2.compute.internal pod pod-subpath-test-inlinevolume-7856 container test-container-subpath-inlinevolume-7856: <nil>
STEP: delete the pod
Oct  2 13:45:30.000: INFO: Waiting for pod pod-subpath-test-inlinevolume-7856 to disappear
Oct  2 13:45:30.192: INFO: Pod pod-subpath-test-inlinevolume-7856 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-7856
Oct  2 13:45:30.192: INFO: Deleting pod "pod-subpath-test-inlinevolume-7856" in namespace "provisioning-450"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":54,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":11,"skipped":148,"failed":0}
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:44:01.849: INFO: >>> kubeConfig: /root/.kube/config
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":12,"skipped":148,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:31.158: INFO: Only supported for providers [gce gke] (not aws)
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:31.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1210" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":7,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:31.824: INFO: Only supported for providers [gce gke] (not aws)
... skipping 26 lines ...
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Oct  2 13:45:24.997: INFO: Waiting up to 5m0s for pod "pod-always-succeedf6cc6a5c-9435-4a14-96ff-d19e409ced86" in namespace "pods-3566" to be "Succeeded or Failed"
Oct  2 13:45:25.196: INFO: Pod "pod-always-succeedf6cc6a5c-9435-4a14-96ff-d19e409ced86": Phase="Pending", Reason="", readiness=false. Elapsed: 198.442139ms
Oct  2 13:45:27.389: INFO: Pod "pod-always-succeedf6cc6a5c-9435-4a14-96ff-d19e409ced86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391806929s
Oct  2 13:45:29.579: INFO: Pod "pod-always-succeedf6cc6a5c-9435-4a14-96ff-d19e409ced86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.582354452s
STEP: Saw pod success
Oct  2 13:45:29.580: INFO: Pod "pod-always-succeedf6cc6a5c-9435-4a14-96ff-d19e409ced86" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:31.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 137 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Oct  2 13:45:23.103: INFO: PersistentVolumeClaim pvc-26gfk found but phase is Pending instead of Bound.
Oct  2 13:45:25.328: INFO: PersistentVolumeClaim pvc-26gfk found and phase=Bound (13.385644843s)
Oct  2 13:45:25.328: INFO: Waiting up to 3m0s for PersistentVolume local-jzlt7 to have phase Bound
Oct  2 13:45:25.525: INFO: PersistentVolume local-jzlt7 found and phase=Bound (197.381375ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6j6j
STEP: Creating a pod to test subpath
Oct  2 13:45:26.101: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6j6j" in namespace "provisioning-3773" to be "Succeeded or Failed"
Oct  2 13:45:26.292: INFO: Pod "pod-subpath-test-preprovisionedpv-6j6j": Phase="Pending", Reason="", readiness=false. Elapsed: 191.256846ms
Oct  2 13:45:28.487: INFO: Pod "pod-subpath-test-preprovisionedpv-6j6j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386572452s
Oct  2 13:45:30.683: INFO: Pod "pod-subpath-test-preprovisionedpv-6j6j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.582586798s
STEP: Saw pod success
Oct  2 13:45:30.684: INFO: Pod "pod-subpath-test-preprovisionedpv-6j6j" satisfied condition "Succeeded or Failed"
Oct  2 13:45:30.875: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-6j6j container test-container-volume-preprovisionedpv-6j6j: <nil>
STEP: delete the pod
Oct  2 13:45:31.273: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6j6j to disappear
Oct  2 13:45:31.464: INFO: Pod pod-subpath-test-preprovisionedpv-6j6j no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6j6j
Oct  2 13:45:31.464: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6j6j" in namespace "provisioning-3773"
... skipping 31 lines ...
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct  2 13:45:30.300: INFO: The status of Pod server-envvars-61c3eada-64b8-4040-a836-9736257d837c is Pending, waiting for it to be Running (with Ready = true)
Oct  2 13:45:32.492: INFO: The status of Pod server-envvars-61c3eada-64b8-4040-a836-9736257d837c is Running (Ready = true)
Oct  2 13:45:33.072: INFO: Waiting up to 5m0s for pod "client-envvars-d290508f-d009-4a42-8ee0-4c4cea38fcae" in namespace "pods-5225" to be "Succeeded or Failed"
Oct  2 13:45:33.263: INFO: Pod "client-envvars-d290508f-d009-4a42-8ee0-4c4cea38fcae": Phase="Pending", Reason="", readiness=false. Elapsed: 190.842673ms
Oct  2 13:45:35.469: INFO: Pod "client-envvars-d290508f-d009-4a42-8ee0-4c4cea38fcae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396342783s
Oct  2 13:45:37.669: INFO: Pod "client-envvars-d290508f-d009-4a42-8ee0-4c4cea38fcae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.59700993s
STEP: Saw pod success
Oct  2 13:45:37.669: INFO: Pod "client-envvars-d290508f-d009-4a42-8ee0-4c4cea38fcae" satisfied condition "Succeeded or Failed"
Oct  2 13:45:37.871: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod client-envvars-d290508f-d009-4a42-8ee0-4c4cea38fcae container env3cont: <nil>
STEP: delete the pod
Oct  2 13:45:38.283: INFO: Waiting for pod client-envvars-d290508f-d009-4a42-8ee0-4c4cea38fcae to disappear
Oct  2 13:45:38.487: INFO: Pod client-envvars-d290508f-d009-4a42-8ee0-4c4cea38fcae no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.946 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":151,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:38.919: INFO: Only supported for providers [gce gke] (not aws)
... skipping 213 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:44:44.471: INFO: >>> kubeConfig: /root/.kube/config
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":6,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:40.568: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
Oct  2 13:45:15.173: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Oct  2 13:45:15.621: INFO: Successfully created a new PD: "aws://ap-southeast-2a/vol-028b52c9a4cc9b7de".
Oct  2 13:45:15.621: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-vx42
STEP: Creating a pod to test exec-volume-test
Oct  2 13:45:15.815: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-vx42" in namespace "volume-8511" to be "Succeeded or Failed"
Oct  2 13:45:16.007: INFO: Pod "exec-volume-test-inlinevolume-vx42": Phase="Pending", Reason="", readiness=false. Elapsed: 192.069125ms
Oct  2 13:45:18.198: INFO: Pod "exec-volume-test-inlinevolume-vx42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382941689s
Oct  2 13:45:20.395: INFO: Pod "exec-volume-test-inlinevolume-vx42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579554471s
Oct  2 13:45:22.586: INFO: Pod "exec-volume-test-inlinevolume-vx42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.770955378s
Oct  2 13:45:24.787: INFO: Pod "exec-volume-test-inlinevolume-vx42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.97205845s
Oct  2 13:45:26.978: INFO: Pod "exec-volume-test-inlinevolume-vx42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.163118714s
STEP: Saw pod success
Oct  2 13:45:26.978: INFO: Pod "exec-volume-test-inlinevolume-vx42" satisfied condition "Succeeded or Failed"
Oct  2 13:45:27.167: INFO: Trying to get logs from node ip-172-20-49-155.ap-southeast-2.compute.internal pod exec-volume-test-inlinevolume-vx42 container exec-container-inlinevolume-vx42: <nil>
STEP: delete the pod
Oct  2 13:45:27.663: INFO: Waiting for pod exec-volume-test-inlinevolume-vx42 to disappear
Oct  2 13:45:27.853: INFO: Pod exec-volume-test-inlinevolume-vx42 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-vx42
Oct  2 13:45:27.853: INFO: Deleting pod "exec-volume-test-inlinevolume-vx42" in namespace "volume-8511"
Oct  2 13:45:28.328: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-028b52c9a4cc9b7de", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-028b52c9a4cc9b7de is currently attached to i-00ac6f3633b38331e
	status code: 400, request id: cd6e250f-02a0-4ad9-9892-5bb50552d035
Oct  2 13:45:34.246: INFO: Couldn't delete PD "aws://ap-southeast-2a/vol-028b52c9a4cc9b7de", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-028b52c9a4cc9b7de is currently attached to i-00ac6f3633b38331e
	status code: 400, request id: 77aa6bae-0008-405e-826f-e4dd370c6010
Oct  2 13:45:40.160: INFO: Successfully deleted PD "aws://ap-southeast-2a/vol-028b52c9a4cc9b7de".
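
The two VolumeInUse failures above are the expected race: EC2 refuses DeleteVolume while the attachment to the node instance is still draining, so the test sleeps 5s and retries until detach completes. A sketch of observing the same thing by hand with the AWS CLI, reusing the IDs from this run:

aws ec2 describe-volumes --region ap-southeast-2 \
    --volume-ids vol-028b52c9a4cc9b7de \
    --query 'Volumes[0].Attachments[].State'   # must be empty before deletion
aws ec2 delete-volume --region ap-southeast-2 --volume-id vol-028b52c9a4cc9b7de
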
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:40.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8511" for this suite.
... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:41.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9754" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":7,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:42.028: INFO: Only supported for providers [gce gke] (not aws)
... skipping 248 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":6,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 23 lines ...
Oct  2 13:45:21.718: INFO: PersistentVolumeClaim pvc-cg8lr found but phase is Pending instead of Bound.
Oct  2 13:45:23.909: INFO: PersistentVolumeClaim pvc-cg8lr found and phase=Bound (13.34810158s)
Oct  2 13:45:23.909: INFO: Waiting up to 3m0s for PersistentVolume local-s9btp to have phase Bound
Oct  2 13:45:24.100: INFO: PersistentVolume local-s9btp found and phase=Bound (190.869728ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-hjh9
STEP: Creating a pod to test exec-volume-test
Oct  2 13:45:24.767: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-hjh9" in namespace "volume-5277" to be "Succeeded or Failed"
Oct  2 13:45:24.958: INFO: Pod "exec-volume-test-preprovisionedpv-hjh9": Phase="Pending", Reason="", readiness=false. Elapsed: 190.857799ms
Oct  2 13:45:27.150: INFO: Pod "exec-volume-test-preprovisionedpv-hjh9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383077686s
Oct  2 13:45:29.344: INFO: Pod "exec-volume-test-preprovisionedpv-hjh9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.576668646s
Oct  2 13:45:31.537: INFO: Pod "exec-volume-test-preprovisionedpv-hjh9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.77005753s
Oct  2 13:45:33.730: INFO: Pod "exec-volume-test-preprovisionedpv-hjh9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.963010705s
Oct  2 13:45:35.924: INFO: Pod "exec-volume-test-preprovisionedpv-hjh9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.156914696s
Oct  2 13:45:38.151: INFO: Pod "exec-volume-test-preprovisionedpv-hjh9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.384208048s
STEP: Saw pod success
Oct  2 13:45:38.151: INFO: Pod "exec-volume-test-preprovisionedpv-hjh9" satisfied condition "Succeeded or Failed"
Oct  2 13:45:38.362: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod exec-volume-test-preprovisionedpv-hjh9 container exec-container-preprovisionedpv-hjh9: <nil>
STEP: delete the pod
Oct  2 13:45:38.794: INFO: Waiting for pod exec-volume-test-preprovisionedpv-hjh9 to disappear
Oct  2 13:45:38.991: INFO: Pod exec-volume-test-preprovisionedpv-hjh9 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-hjh9
Oct  2 13:45:38.991: INFO: Deleting pod "exec-volume-test-preprovisionedpv-hjh9" in namespace "volume-5277"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:44.124: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 99 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":5,"skipped":13,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:45:32.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
• [SLOW TEST:14.348 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":6,"skipped":13,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
STEP: Destroying namespace "services-4762" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":10,"skipped":90,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:48.080: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 33 lines ...
Oct  2 13:45:45.826: INFO: The status of Pod pod-update-activedeadlineseconds-2476ea38-b6fb-48dd-95fa-04dc0b9ec546 is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct  2 13:45:47.093: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2476ea38-b6fb-48dd-95fa-04dc0b9ec546"
Oct  2 13:45:47.094: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2476ea38-b6fb-48dd-95fa-04dc0b9ec546" in namespace "pods-1164" to be "terminated due to deadline exceeded"
Oct  2 13:45:47.284: INFO: Pod "pod-update-activedeadlineseconds-2476ea38-b6fb-48dd-95fa-04dc0b9ec546": Phase="Running", Reason="", readiness=true. Elapsed: 190.22952ms
Oct  2 13:45:49.480: INFO: Pod "pod-update-activedeadlineseconds-2476ea38-b6fb-48dd-95fa-04dc0b9ec546": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.386356799s
Oct  2 13:45:49.480: INFO: Pod "pod-update-activedeadlineseconds-2476ea38-b6fb-48dd-95fa-04dc0b9ec546" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:49.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1164" for this suite.


• [SLOW TEST:7.570 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":31,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:49.935: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 89 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:428
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":8,"skipped":44,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:50.535: INFO: Only supported for providers [vsphere] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 28 lines ...
Oct  2 13:45:46.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct  2 13:45:47.885: INFO: Waiting up to 5m0s for pod "security-context-822416bd-2a4d-453a-8dc3-9c3dbc22c757" in namespace "security-context-8796" to be "Succeeded or Failed"
Oct  2 13:45:48.077: INFO: Pod "security-context-822416bd-2a4d-453a-8dc3-9c3dbc22c757": Phase="Pending", Reason="", readiness=false. Elapsed: 192.075197ms
Oct  2 13:45:50.268: INFO: Pod "security-context-822416bd-2a4d-453a-8dc3-9c3dbc22c757": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.382602212s
STEP: Saw pod success
Oct  2 13:45:50.268: INFO: Pod "security-context-822416bd-2a4d-453a-8dc3-9c3dbc22c757" satisfied condition "Succeeded or Failed"
Oct  2 13:45:50.458: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod security-context-822416bd-2a4d-453a-8dc3-9c3dbc22c757 container test-container: <nil>
STEP: delete the pod
Oct  2 13:45:50.844: INFO: Waiting for pod security-context-822416bd-2a4d-453a-8dc3-9c3dbc22c757 to disappear
Oct  2 13:45:51.033: INFO: Pod security-context-822416bd-2a4d-453a-8dc3-9c3dbc22c757 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:51.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8796" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":7,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:51.445: INFO: Only supported for providers [azure] (not aws)
... skipping 47 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-79f0002b-1d71-44d5-a241-562f73ecefc9
STEP: Creating a pod to test consume configMaps
Oct  2 13:45:51.898: INFO: Waiting up to 5m0s for pod "pod-configmaps-285d1f47-661b-40fa-bffc-59bcdba88282" in namespace "configmap-7381" to be "Succeeded or Failed"
Oct  2 13:45:52.088: INFO: Pod "pod-configmaps-285d1f47-661b-40fa-bffc-59bcdba88282": Phase="Pending", Reason="", readiness=false. Elapsed: 189.900571ms
Oct  2 13:45:54.278: INFO: Pod "pod-configmaps-285d1f47-661b-40fa-bffc-59bcdba88282": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.379863717s
STEP: Saw pod success
Oct  2 13:45:54.278: INFO: Pod "pod-configmaps-285d1f47-661b-40fa-bffc-59bcdba88282" satisfied condition "Succeeded or Failed"
Oct  2 13:45:54.468: INFO: Trying to get logs from node ip-172-20-33-188.ap-southeast-2.compute.internal pod pod-configmaps-285d1f47-661b-40fa-bffc-59bcdba88282 container agnhost-container: <nil>
STEP: delete the pod
Oct  2 13:45:54.893: INFO: Waiting for pod pod-configmaps-285d1f47-661b-40fa-bffc-59bcdba88282 to disappear
Oct  2 13:45:55.083: INFO: Pod pod-configmaps-285d1f47-661b-40fa-bffc-59bcdba88282 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:55.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7381" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:55.567: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Driver local doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":73,"failed":0}
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:45:32.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
Oct  2 13:45:48.963: INFO: Got stdout from 3.25.92.33:22: Hello from ubuntu@ip-172-20-49-155
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Oct  2 13:45:53.578: INFO: Got stdout from 3.106.56.47:22: stdout
Oct  2 13:45:53.578: INFO: Got stderr from 3.106.56.47:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing ubuntu@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:58.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-224" for this suite.


• [SLOW TEST:26.089 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":9,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:45:58.975: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":12,"skipped":64,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  2 13:45:34.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 34 lines ...
Oct  2 13:45:51.832: INFO: Running '/tmp/kubectl3829199734/kubectl --server=https://api.e2e-de872154ff-19973.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3258 explain e2e-test-crd-publish-openapi-6198-crds.spec'
Oct  2 13:45:52.660: INFO: stderr: ""
Oct  2 13:45:52.660: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6198-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Oct  2 13:45:52.660: INFO: Running '/tmp/kubectl3829199734/kubectl --server=https://api.e2e-de872154ff-19973.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3258 explain e2e-test-crd-publish-openapi-6198-crds.spec.bars'
Oct  2 13:45:53.531: INFO: stderr: ""
Oct  2 13:45:53.531: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6198-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Oct  2 13:45:53.532: INFO: Running '/tmp/kubectl3829199734/kubectl --server=https://api.e2e-de872154ff-19973.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3258 explain e2e-test-crd-publish-openapi-6198-crds.spec.bars2'
Oct  2 13:45:54.380: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  2 13:45:59.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3258" for this suite.
... skipping 2 lines ...
• [SLOW TEST:25.924 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":13,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:46:00.140: INFO: Only supported for providers [azure] (not aws)
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  2 13:46:01.426: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41667 lines ...
nE1002 13:56:40.170296       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:56:40.351470       1 aws.go:2045] Releasing in-process attachment entry: bo -> volume vol-0148edae8bb66633d\nI1002 13:56:40.351830       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume \"pvc-b7622837-7b54-4324-86c4-88c21796a514\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0148edae8bb66633d\") from node \"ip-172-20-33-188.ap-southeast-2.compute.internal\" \nI1002 13:56:40.352371       1 event.go:291] \"Event occurred\" object=\"provisioning-6097/pod-subpath-test-dynamicpv-rlpk\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-b7622837-7b54-4324-86c4-88c21796a514\\\" \"\nI1002 13:56:40.401804       1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-6969\nI1002 13:56:40.423517       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [stable.example.com/v2, Resource=e2e-test-crd-webhook-2506-crds], removed: []\nI1002 13:56:40.425278       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-webhook-2506-crds.stable.example.com\nI1002 13:56:40.429117       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI1002 13:56:40.529688       1 shared_informer.go:247] Caches are synced for resource quota \nI1002 13:56:40.529792       1 resource_quota_controller.go:454] synced quota controller\nE1002 13:56:40.710443       1 tokens_controller.go:262] error synchronizing serviceaccount projected-2397/default: secrets \"default-token-c5jt7\" is forbidden: unable to create new content in namespace projected-2397 because it is being terminated\nE1002 13:56:40.750480       1 pv_controller.go:1452] error finding provisioning plugin for claim provisioning-8182/pvc-msfx2: storageclass.storage.k8s.io \"provisioning-8182\" not found\nI1002 13:56:40.751187       1 event.go:291] \"Event occurred\" object=\"provisioning-8182/pvc-msfx2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-8182\\\" not found\"\nE1002 13:56:40.757871       1 tokens_controller.go:262] error synchronizing serviceaccount secret-namespace-8969/default: secrets \"default-token-lk2sk\" is forbidden: unable to create new content in namespace secret-namespace-8969 because it is being terminated\nI1002 13:56:40.844799       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9850-4264/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1002 13:56:40.958471       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [stable.example.com/v2, Resource=e2e-test-crd-webhook-2506-crds], removed: []\nI1002 13:56:40.976387       1 pv_controller.go:879] volume \"local-x56z8\" entered phase \"Available\"\nI1002 13:56:40.993529       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI1002 13:56:40.993613       1 shared_informer.go:247] Caches are synced for garbage collector \nI1002 13:56:40.993626    
   1 garbagecollector.go:254] synced garbage collector\nI1002 13:56:41.192608       1 namespace_controller.go:185] Namespace has been deleted services-1713\nE1002 13:56:41.280198       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:56:41.646981       1 namespace_controller.go:185] Namespace has been deleted multi-az-2264\nI1002 13:56:41.996387       1 namespace_controller.go:185] Namespace has been deleted secrets-6134\nI1002 13:56:41.997492       1 namespace_controller.go:185] Namespace has been deleted volume-6144\nE1002 13:56:42.827937       1 tokens_controller.go:262] error synchronizing serviceaccount security-context-3707/default: secrets \"default-token-v5czg\" is forbidden: unable to create new content in namespace security-context-3707 because it is being terminated\nI1002 13:56:42.911376       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-4973/e2e-test-crd-conversion-webhook-6vj8f\" objectUID=4fe0d0be-1b2f-499e-8d77-9e2b2c4e395d kind=\"EndpointSlice\" virtual=false\nI1002 13:56:42.918294       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-4973/e2e-test-crd-conversion-webhook-6vj8f\" objectUID=4fe0d0be-1b2f-499e-8d77-9e2b2c4e395d kind=\"EndpointSlice\" propagationPolicy=Background\nE1002 13:56:42.918871       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:56:43.086465       1 pv_controller.go:879] volume \"local-pvpd7lb\" entered phase \"Available\"\nI1002 13:56:43.129935       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-4973/sample-crd-conversion-webhook-deployment-697cdbd8f4\" objectUID=4eef0cee-baa5-4828-8757-703519250a60 kind=\"ReplicaSet\" virtual=false\nI1002 13:56:43.130618       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"crd-webhook-4973/sample-crd-conversion-webhook-deployment\"\nI1002 13:56:43.146122       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-4973/sample-crd-conversion-webhook-deployment-697cdbd8f4\" objectUID=4eef0cee-baa5-4828-8757-703519250a60 kind=\"ReplicaSet\" propagationPolicy=Background\nI1002 13:56:43.157280       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-4973/sample-crd-conversion-webhook-deployment-697cdbd8f4-8n728\" objectUID=cf4b65f0-9311-41b5-bd75-1fd0b402b06c kind=\"Pod\" virtual=false\nI1002 13:56:43.165972       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-4973/sample-crd-conversion-webhook-deployment-697cdbd8f4-8n728\" objectUID=cf4b65f0-9311-41b5-bd75-1fd0b402b06c kind=\"Pod\" propagationPolicy=Background\nI1002 13:56:43.270344       1 pv_controller.go:930] claim \"persistent-local-volumes-test-4827/pvc-f6xgz\" bound to volume \"local-pvpd7lb\"\nI1002 13:56:43.279790       1 pv_controller.go:879] volume \"local-pvpd7lb\" entered phase \"Bound\"\nI1002 13:56:43.279822       1 pv_controller.go:982] volume \"local-pvpd7lb\" bound to claim \"persistent-local-volumes-test-4827/pvc-f6xgz\"\nI1002 13:56:43.292335       1 pv_controller.go:823] claim \"persistent-local-volumes-test-4827/pvc-f6xgz\" entered phase \"Bound\"\nI1002 13:56:43.409810       1 namespace_controller.go:185] Namespace has been deleted node-lease-test-7935\nI1002 13:56:43.662250       1 
namespace_controller.go:185] Namespace has been deleted volume-5169\nI1002 13:56:43.736556       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6512-8748/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1002 13:56:44.112298       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6512-8748/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nE1002 13:56:44.269529       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1002 13:56:44.395222       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:56:44.630523       1 namespace_controller.go:185] Namespace has been deleted cronjob-2756\nI1002 13:56:45.093405       1 namespace_controller.go:185] Namespace has been deleted resourcequota-8037\nI1002 13:56:45.109478       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"kubectl-746/agnhost-primary\" need=1 creating=1\nI1002 13:56:45.115635       1 event.go:291] \"Event occurred\" object=\"kubectl-746/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-qcjqh\"\nI1002 13:56:45.687587       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-8143\nI1002 13:56:45.891961       1 namespace_controller.go:185] Namespace has been deleted projected-2397\nI1002 13:56:46.079220       1 namespace_controller.go:185] Namespace has been deleted secret-namespace-8969\nI1002 13:56:46.321305       1 controller_ref_manager.go:229] patching pod kubectl-746_agnhost-primary-qcjqh to remove its controllerRef to v1/ReplicationController:agnhost-primary\nI1002 13:56:46.326461       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-746/agnhost-primary\" objectUID=ba9b6591-c299-4827-b48b-af524c7c33bd kind=\"ReplicationController\" virtual=false\nI1002 13:56:46.326632       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"kubectl-746/agnhost-primary\" need=1 creating=1\nI1002 13:56:46.333904       1 garbagecollector.go:510] object [v1/ReplicationController, namespace: kubectl-746, name: agnhost-primary, uid: ba9b6591-c299-4827-b48b-af524c7c33bd]'s doesn't have an owner, continue on next item\nI1002 13:56:46.334127       1 event.go:291] \"Event occurred\" object=\"kubectl-746/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-hsmmw\"\nE1002 13:56:46.803851       1 tokens_controller.go:262] error synchronizing serviceaccount apply-4965/default: secrets \"default-token-99xmq\" is forbidden: unable to create new content in namespace apply-4965 because it is being terminated\nE1002 13:56:46.874665       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 
I1002 13:56:47.069557       1 event.go:291] "Event occurred" object="csi-mock-volumes-6512/pvc-9clqd" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-6512\" or manually created by system administrator"
I1002 13:56:47.083865       1 pv_controller.go:879] volume "pvc-f0c3bd2b-9018-42d8-a9df-9af9f40032f8" entered phase "Bound"
I1002 13:56:47.083896       1 pv_controller.go:982] volume "pvc-f0c3bd2b-9018-42d8-a9df-9af9f40032f8" bound to claim "csi-mock-volumes-6512/pvc-9clqd"
I1002 13:56:47.090916       1 pv_controller.go:823] claim "csi-mock-volumes-6512/pvc-9clqd" entered phase "Bound"
E1002 13:56:47.319000       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-6916/default: secrets "default-token-7vp5d" is forbidden: unable to create new content in namespace emptydir-6916 because it is being terminated
E1002 13:56:47.441720       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1002 13:56:47.919476       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-78f32896-add0-463f-a77b-932722738ef8" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-9686^83584753-2388-11ec-9914-6295edf3d436") on node "ip-172-20-46-238.ap-southeast-2.compute.internal"
I1002 13:56:47.919743       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-f0c3bd2b-9018-42d8-a9df-9af9f40032f8" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-6512^4") from node "ip-172-20-49-155.ap-southeast-2.compute.internal"
I1002 13:56:47.928815       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-78f32896-add0-463f-a77b-932722738ef8" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-9686^83584753-2388-11ec-9914-6295edf3d436") on node "ip-172-20-46-238.ap-southeast-2.compute.internal"
I1002 13:56:48.101128       1 namespace_controller.go:185] Namespace has been deleted kubectl-9302
I1002 13:56:48.126791       1 namespace_controller.go:185] Namespace has been deleted volume-293-8506
I1002 13:56:48.209133       1 namespace_controller.go:185] Namespace has been deleted security-context-3707
I1002 13:56:48.461577       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-f0c3bd2b-9018-42d8-a9df-9af9f40032f8" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-6512^4") from node "ip-172-20-49-155.ap-southeast-2.compute.internal"
I1002 13:56:48.461695       1 event.go:291] "Event occurred" object="csi-mock-volumes-6512/pvc-volume-tester-vjlxf" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-f0c3bd2b-9018-42d8-a9df-9af9f40032f8\" "
I1002 13:56:48.503458       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-78f32896-add0-463f-a77b-932722738ef8" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-9686^83584753-2388-11ec-9914-6295edf3d436") on node "ip-172-20-46-238.ap-southeast-2.compute.internal"
I1002 13:56:49.002712       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-6097/aws98brb"
I1002 13:56:49.007198       1 pv_controller.go:640] volume "pvc-b7622837-7b54-4324-86c4-88c21796a514" is released and reclaim policy "Delete" will be executed
I1002 13:56:49.010352       1 pv_controller.go:879] volume "pvc-b7622837-7b54-4324-86c4-88c21796a514" entered phase "Released"
I1002 13:56:49.011888       1 pv_controller.go:1341] isVolumeReleased[pvc-b7622837-7b54-4324-86c4-88c21796a514]: volume is released
I1002 13:56:49.164237       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-2a/vol-0148edae8bb66633d: error deleting EBS volume "vol-0148edae8bb66633d" since volume is currently attached to "i-05ed96ed47c12c5c8"
E1002 13:56:49.164301       1 goroutinemap.go:150] Operation for "delete-pvc-b7622837-7b54-4324-86c4-88c21796a514[27d3a73d-01b1-42d3-b282-0f2a407b4568]" failed. No retries permitted until 2021-10-02 13:56:49.664280879 +0000 UTC m=+1315.992490282 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-0148edae8bb66633d\" since volume is currently attached to \"i-05ed96ed47c12c5c8\""
I1002 13:56:49.164350       1 event.go:291] "Event occurred" object="pvc-b7622837-7b54-4324-86c4-88c21796a514" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0148edae8bb66633d\" since volume is currently attached to \"i-05ed96ed47c12c5c8\""
I1002 13:56:49.308522       1 namespace_controller.go:185] Namespace has been deleted security-context-test-2643
I1002 13:56:49.548532       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7354
I1002 13:56:49.850914       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-b7622837-7b54-4324-86c4-88c21796a514" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0148edae8bb66633d") on node "ip-172-20-33-188.ap-southeast-2.compute.internal"
I1002 13:56:49.853401       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-b7622837-7b54-4324-86c4-88c21796a514" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0148edae8bb66633d") on node "ip-172-20-33-188.ap-southeast-2.compute.internal"
I1002 13:56:49.898363       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-4827/pod-93f6135d-b306-462b-92af-f896ed810fc7" PVC="persistent-local-volumes-test-4827/pvc-f6xgz"
I1002 13:56:49.898386       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-4827/pvc-f6xgz"
I1002 13:56:51.225849       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-7354-6607/csi-mockplugin-0" objectUID=b511b23a-fd68-41f6-a126-751633284daa kind="Pod" virtual=false
I1002 13:56:51.225856       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-7354-6607/csi-mockplugin
I1002 13:56:51.225993       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-7354-6607/csi-mockplugin-7c4fbc964d" objectUID=31b63a93-4b7a-4c0d-89c9-19f950d820f9 kind="ControllerRevision" virtual=false
I1002 13:56:51.228810       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-7354-6607/csi-mockplugin-7c4fbc964d" objectUID=31b63a93-4b7a-4c0d-89c9-19f950d820f9 kind="ControllerRevision" propagationPolicy=Background
I1002 13:56:51.229237       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-7354-6607/csi-mockplugin-0" objectUID=b511b23a-fd68-41f6-a126-751633284daa kind="Pod" propagationPolicy=Background
E1002 13:56:51.576559       1 tokens_controller.go:262] error synchronizing serviceaccount multi-az-2104/default: secrets "default-token-frlkv" is forbidden: unable to create new content in namespace multi-az-2104 because it is being terminated
E1002 13:56:51.584238       1 tokens_controller.go:262] error synchronizing serviceaccount kubelet-test-7388/default: secrets "default-token-qlght" is forbidden: unable to create new content in namespace kubelet-test-7388 because it is being terminated
E1002 13:56:51.599056       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1002 13:56:51.958576       1 garbagecollector.go:471] "Processing object" object="kubectl-746/agnhost-primary-hsmmw" objectUID=b11df50d-0e75-4d16-b5d1-82153fd6afd8 kind="Pod" virtual=false
I1002 13:56:51.961228       1 garbagecollector.go:580] "Deleting object" object="kubectl-746/agnhost-primary-hsmmw" objectUID=b11df50d-0e75-4d16-b5d1-82153fd6afd8 kind="Pod" propagationPolicy=Background
E1002 13:56:51.995163       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-746/default: secrets "default-token-mxbr4" is forbidden: unable to create new content in namespace kubectl-746 because it is being terminated
I1002 13:56:52.035465       1 event.go:291] "Event occurred" object="csi-mock-volumes-9850/pvc-cghhz" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1002 13:56:52.237566       1 event.go:291] "Event occurred" object="csi-mock-volumes-9850/pvc-cghhz" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-9850\" or manually created by system administrator"
E1002 13:56:52.312184       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1002 13:56:52.408432       1 namespace_controller.go:185] Namespace has been deleted emptydir-6916
I1002 13:56:52.438685       1 event.go:291] "Event occurred" object="csi-mock-volumes-9850/pvc-cghhz" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForPodScheduled" message="waiting for pod pvc-volume-tester-9qndx to be scheduled"
I1002 13:56:52.633546       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-4827/pod-93f6135d-b306-462b-92af-f896ed810fc7" PVC="persistent-local-volumes-test-4827/pvc-f6xgz"
I1002 13:56:52.633570       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-4827/pvc-f6xgz"
I1002 13:56:52.866927       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-1499/awswqvmb"
I1002 13:56:52.879280       1 pv_controller.go:640] volume "pvc-0e2b595c-818c-4ad5-8e23-b251746158fb" is released and reclaim policy "Delete" will be executed
I1002 13:56:52.882832       1 pv_controller.go:879] volume "pvc-0e2b595c-818c-4ad5-8e23-b251746158fb" entered phase "Released"
I1002 13:56:52.884318       1 pv_controller.go:1341] isVolumeReleased[pvc-0e2b595c-818c-4ad5-8e23-b251746158fb]: volume is released
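The lines above trace claims through the external-provisioning lifecycle: an ExternalProvisioning event, the volume entering phase "Bound", attach, release, and the delete retries that follow. When triaging a run like this, the Events recorded against one claim are often the quickest signal. Below is a minimal client-go sketch that lists them; the namespace and claim name are copied from the log purely for illustration, and the kubeconfig location is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a local kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// List Events involving a single PVC. The names below mirror the log
	// above and are illustrative only.
	events, err := client.CoreV1().Events("csi-mock-volumes-6512").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.kind=PersistentVolumeClaim,involvedObject.name=pvc-9clqd",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
	}
}
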
I1002 13:56:52.895378       1 namespace_controller.go:185] Namespace has been deleted crd-webhook-4973
I1002 13:56:53.041944       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-2a/vol-0dea2f227751a855a: error deleting EBS volume "vol-0dea2f227751a855a" since volume is currently attached to "i-04eb7a9acdc53fb9b"
E1002 13:56:53.042145       1 goroutinemap.go:150] Operation for "delete-pvc-0e2b595c-818c-4ad5-8e23-b251746158fb[cebc607d-7f92-41cc-9feb-5e284fea0e15]" failed. No retries permitted until 2021-10-02 13:56:53.542120224 +0000 UTC m=+1319.870329632 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-0dea2f227751a855a\" since volume is currently attached to \"i-04eb7a9acdc53fb9b\""
I1002 13:56:53.042381       1 event.go:291] "Event occurred" object="pvc-0e2b595c-818c-4ad5-8e23-b251746158fb" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0dea2f227751a855a\" since volume is currently attached to \"i-04eb7a9acdc53fb9b\""
I1002 13:56:53.588352       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-0e2b595c-818c-4ad5-8e23-b251746158fb" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0dea2f227751a855a") on node "ip-172-20-42-183.ap-southeast-2.compute.internal"
I1002 13:56:53.591129       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-0e2b595c-818c-4ad5-8e23-b251746158fb" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0dea2f227751a855a") on node "ip-172-20-42-183.ap-southeast-2.compute.internal"
I1002 13:56:53.674854       1 pv_controller.go:930] claim "provisioning-8182/pvc-msfx2" bound to volume "local-x56z8"
I1002 13:56:53.675287       1 event.go:291] "Event occurred" object="csi-mock-volumes-9850/pvc-cghhz" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForPodScheduled" message="waiting for pod pvc-volume-tester-9qndx to be scheduled"
I1002 13:56:53.679544       1 pv_controller.go:1341] isVolumeReleased[pvc-b7622837-7b54-4324-86c4-88c21796a514]: volume is released
I1002 13:56:53.683472       1 pv_controller.go:1341] isVolumeReleased[pvc-0e2b595c-818c-4ad5-8e23-b251746158fb]: volume is released
I1002 13:56:53.686286       1 pv_controller.go:879] volume "local-x56z8" entered phase "Bound"
I1002 13:56:53.687768       1 pv_controller.go:982] volume "local-x56z8" bound to claim "provisioning-8182/pvc-msfx2"
I1002 13:56:53.694282       1 pv_controller.go:823] claim "provisioning-8182/pvc-msfx2" entered phase "Bound"
E1002 13:56:53.747599       1 pv_controller.go:1452] error finding provisioning plugin for claim provisioning-5798/pvc-2srhl: storageclass.storage.k8s.io "provisioning-5798" not found
I1002 13:56:53.748044       1 event.go:291] "Event occurred" object="provisioning-5798/pvc-2srhl" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-5798\" not found"
I1002 13:56:53.806189       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-2a/vol-0dea2f227751a855a: error deleting EBS volume "vol-0dea2f227751a855a" since volume is currently attached to "i-04eb7a9acdc53fb9b"
E1002 13:56:53.806323       1 goroutinemap.go:150] Operation for "delete-pvc-0e2b595c-818c-4ad5-8e23-b251746158fb[cebc607d-7f92-41cc-9feb-5e284fea0e15]" failed. No retries permitted until 2021-10-02 13:56:54.806303587 +0000 UTC m=+1321.134512983 (durationBeforeRetry 1s). Error: "error deleting EBS volume \"vol-0dea2f227751a855a\" since volume is currently attached to \"i-04eb7a9acdc53fb9b\""
I1002 13:56:53.806419       1 event.go:291] "Event occurred" object="pvc-0e2b595c-818c-4ad5-8e23-b251746158fb" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0dea2f227751a855a\" since volume is currently attached to \"i-04eb7a9acdc53fb9b\""
I1002 13:56:53.878322       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-2a/vol-0148edae8bb66633d: error deleting EBS volume "vol-0148edae8bb66633d" since volume is currently attached to "i-05ed96ed47c12c5c8"
E1002 13:56:53.878494       1 goroutinemap.go:150] Operation for "delete-pvc-b7622837-7b54-4324-86c4-88c21796a514[27d3a73d-01b1-42d3-b282-0f2a407b4568]" failed. No retries permitted until 2021-10-02 13:56:54.878474179 +0000 UTC m=+1321.206683589 (durationBeforeRetry 1s). Error: "error deleting EBS volume \"vol-0148edae8bb66633d\" since volume is currently attached to \"i-05ed96ed47c12c5c8\""
I1002 13:56:53.878626       1 event.go:291] "Event occurred" object="pvc-b7622837-7b54-4324-86c4-88c21796a514" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0148edae8bb66633d\" since volume is currently attached to \"i-05ed96ed47c12c5c8\""
I1002 13:56:53.941345       1 pv_controller.go:879] volume "local-4djzr" entered phase "Available"
I1002 13:56:54.298215       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-3274/test-cleanup-controller" need=1 creating=1
I1002 13:56:54.307573       1 event.go:291] "Event occurred" object="deployment-3274/test-cleanup-controller" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-cleanup-controller-8mm2z"
I1002 13:56:54.470620       1 event.go:291] "Event occurred" object="csi-mock-volumes-9850/pvc-cghhz" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-9850\" or manually created by system administrator"
I1002 13:56:54.675008       1 pv_controller.go:879] volume "pvc-dfcfd109-bedc-4c86-b2de-de80841818a3" entered phase "Bound"
I1002 13:56:54.675045       1 pv_controller.go:982] volume "pvc-dfcfd109-bedc-4c86-b2de-de80841818a3" bound to claim "csi-mock-volumes-9850/pvc-cghhz"
I1002 13:56:54.685058       1 pv_controller.go:823] claim "csi-mock-volumes-9850/pvc-cghhz" entered phase "Bound"
I1002 13:56:55.104818       1 namespace_controller.go:185] Namespace has been deleted provisioning-5249
I1002 13:56:55.240166       1 aws.go:2299] Waiting for volume "vol-0148edae8bb66633d" state: actual=detaching, desired=detached
E1002 13:56:55.959100       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1002 13:56:55.967211       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1002 13:56:56.682238       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-7354-6607/default: secrets "default-token-qt9rj" is forbidden: unable to create new content in namespace csi-mock-volumes-7354-6607 because it is being terminated
I1002 13:56:56.688357       1 namespace_controller.go:185] Namespace has been deleted multi-az-2104
I1002 13:56:56.935417       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-9686/csi-hostpathqrz5g"
I1002 13:56:56.941015       1 pv_controller.go:640] volume "pvc-78f32896-add0-463f-a77b-932722738ef8" is released and reclaim policy "Delete" will be executed
I1002 13:56:56.943214       1 pv_controller.go:879] volume "pvc-78f32896-add0-463f-a77b-932722738ef8" entered phase "Released"
I1002 13:56:56.945647       1 pv_controller.go:1341] isVolumeReleased[pvc-78f32896-add0-463f-a77b-932722738ef8]: volume is released
I1002 13:56:56.963593       1 pv_controller_base.go:505] deletion of claim "provisioning-9686/csi-hostpathqrz5g" was already processed
I1002 13:56:57.084477       1 namespace_controller.go:185] Namespace has been deleted apply-4965
I1002 13:56:57.322570       1 aws.go:2525] waitForAttachmentStatus returned non-nil attachment with state=detached: {
  AttachTime: 2021-10-02 13:56:38 +0000 UTC,
  DeleteOnTermination: false,
  Device: "/dev/xvdbo",
  InstanceId: "i-05ed96ed47c12c5c8",
  State: "detaching",
  VolumeId: "vol-0148edae8bb66633d"
}
I1002 13:56:57.322615       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-b7622837-7b54-4324-86c4-88c21796a514" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0148edae8bb66633d") on node "ip-172-20-33-188.ap-southeast-2.compute.internal"
I1002 13:56:57.448076       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-3274/test-cleanup-deployment-5b4d99b59b" need=1 creating=1
I1002 13:56:57.449014       1 event.go:291] "Event occurred" object="deployment-3274/test-cleanup-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-cleanup-deployment-5b4d99b59b to 1"
I1002 13:56:57.457166       1 event.go:291] "Event occurred" object="deployment-3274/test-cleanup-deployment-5b4d99b59b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-cleanup-deployment-5b4d99b59b-l5q5k"
I1002 13:56:57.467266       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-3274/test-cleanup-deployment" err="Operation cannot be fulfilled on deployments.apps \"test-cleanup-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I1002 13:56:57.543307       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-7806/webserver-847dcfb7fb" need=6 creating=6
I1002 13:56:57.543595       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-847dcfb7fb to 6"
I1002 13:56:57.557094       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-7806/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
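The goroutinemap lines above show the PV delete operation being retried with a doubling delay (durationBeforeRetry 500ms, then 1s) while the EBS volume is still attached to an instance. A minimal sketch of the same retry-with-exponential-backoff pattern, built on the apimachinery wait package; deleteVolume here is a hypothetical stand-in for the failing cloud call, not the controller's actual code.

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// deleteVolume stands in for the EBS delete call that the log shows
// failing while the volume is still attached (hypothetical).
func deleteVolume() error {
	return fmt.Errorf("volume is currently attached")
}

func main() {
	// Mirrors the doubling visible in the log: 500ms, then 1s, and so on.
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond,
		Factor:   2.0,
		Steps:    5,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if deleteVolume() != nil {
			return false, nil // not done yet; retry after the next interval
		}
		return true, nil // delete succeeded
	})
	fmt.Println("result:", err) // wait.ErrWaitTimeout once the steps run out
}

In the controller the retry is driven by requeuing rather than by blocking in a loop, but the backoff schedule is the same idea.
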
I1002 13:56:57.557474       1 event.go:291] "Event occurred" object="deployment-7806/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-kc89s"
I1002 13:56:57.565361       1 event.go:291] "Event occurred" object="deployment-7806/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-tkhpj"
I1002 13:56:57.569903       1 event.go:291] "Event occurred" object="deployment-7806/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-24c5f"
I1002 13:56:57.583120       1 event.go:291] "Event occurred" object="deployment-7806/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-fb8wf"
I1002 13:56:57.583451       1 event.go:291] "Event occurred" object="deployment-7806/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-dbc4c"
I1002 13:56:57.583924       1 event.go:291] "Event occurred" object="deployment-7806/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-zpfzb"
E1002 13:56:57.912793       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1002 13:56:57.922740       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-7806/webserver-847dcfb7fb" need=7 creating=1
I1002 13:56:57.923649       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-847dcfb7fb to 7"
I1002 13:56:57.929632       1 event.go:291] "Event occurred" object="deployment-7806/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-847dcfb7fb-w7q9b"
E1002 13:56:58.164390       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-4827/default: secrets "default-token-fvqfb" is forbidden: unable to create new content in namespace persistent-local-volumes-test-4827 because it is being terminated
I1002 13:56:58.253704       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-4827/pod-93f6135d-b306-462b-92af-f896ed810fc7" PVC="persistent-local-volumes-test-4827/pvc-f6xgz"
I1002 13:56:58.253735       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-4827/pvc-f6xgz"
I1002 13:56:58.259741       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-4827/pvc-f6xgz"
I1002 13:56:58.270827       1 pv_controller.go:640] volume "local-pvpd7lb" is released and reclaim policy "Retain" will be executed
I1002 13:56:58.275844       1 pv_controller.go:879] volume "local-pvpd7lb" entered phase "Released"
I1002 13:56:58.279565       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-4827/pvc-f6xgz" was already processed
I1002 13:56:58.373365       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-6584b976d5 to 2"
I1002 13:56:58.373642       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-7806/webserver-6584b976d5" need=2 creating=2
I1002 13:56:58.380168       1 event.go:291] "Event occurred" object="deployment-7806/webserver-6584b976d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6584b976d5-h6sz4"
I1002 13:56:58.385542       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-7806/webserver-847dcfb7fb" need=6 deleting=1
I1002 13:56:58.385673       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-7806/webserver-847dcfb7fb" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5]
I1002 13:56:58.390631       1 controller_utils.go:602] "Deleting pod" controller="webserver-847dcfb7fb" pod="deployment-7806/webserver-847dcfb7fb-kc89s"
I1002 13:56:58.392824       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-847dcfb7fb to 6"
I1002 13:56:58.393768       1 event.go:291] "Event occurred" object="deployment-7806/webserver-6584b976d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6584b976d5-s7jcl"
I1002 13:56:58.407691       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-7806/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1002 13:56:58.413408       1 event.go:291] "Event occurred" object="deployment-7806/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-847dcfb7fb-kc89s"
I1002 13:56:58.419925       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-7806/webserver" err="Operation cannot be fulfilled on replicasets.apps \"webserver-6584b976d5\": the object has been modified; please apply your changes to the latest version and try again"
I1002 13:56:58.426331       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-7806/webserver-6584b976d5" need=3 creating=1
I1002 13:56:58.427339       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-6584b976d5 to 3"
I1002 13:56:58.439647       1 event.go:291] "Event occurred" object="deployment-7806/webserver-6584b976d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6584b976d5-xt8h5"
E1002 13:56:58.610001       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-4477/default: secrets "default-token-j8whj" is forbidden: unable to create new content in namespace kubectl-4477 because it is being terminated
I1002 13:56:58.755558       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="DeploymentRollback" message="Rolled back deployment \"webserver\" to revision 1"
I1002 13:56:58.771302       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-7806/webserver" err="Operation cannot be fulfilled on replicasets.apps \"webserver-847dcfb7fb\": the object has been modified; please apply your changes to the latest version and try again"
I1002 13:56:58.784632       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-7806/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1002 13:56:59.011067       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-0e2b595c-818c-4ad5-8e23-b251746158fb" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0dea2f227751a855a") on node "ip-172-20-42-183.ap-southeast-2.compute.internal"
E1002 13:56:59.027766       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-8968/default: secrets "default-token-6rgmm" is forbidden: unable to create new content in namespace emptydir-8968 because it is being terminated
I1002 13:56:59.722971       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="DeploymentRollback" message="Rolled back deployment \"webserver\" to revision 2"
I1002 13:56:59.736011       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-7806/webserver" err="Operation cannot be fulfilled on replicasets.apps \"webserver-6584b976d5\": the object has been modified; please apply your changes to the latest version and try again"
I1002 13:56:59.753369       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-7806/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1002 13:56:59.836079       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-7806/webserver-847dcfb7fb" need=5 deleting=1
I1002 13:56:59.836222       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-7806/webserver-847dcfb7fb" relatedReplicaSets=[webserver-6584b976d5 webserver-847dcfb7fb]
I1002 13:56:59.836403       1 controller_utils.go:602] "Deleting pod" controller="webserver-847dcfb7fb" pod="deployment-7806/webserver-847dcfb7fb-dbc4c"
I1002 13:56:59.839057       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-847dcfb7fb to 5"
I1002 13:56:59.858209       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-7806/webserver-6584b976d5" need=4 creating=1
I1002 13:56:59.861374       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-6584b976d5 to 4"
I1002 13:56:59.867668       1 event.go:291] "Event occurred" object="deployment-7806/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-847dcfb7fb-dbc4c"
reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-dbc4c\"\nI1002 13:56:59.876625       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-lkbmp\"\nI1002 13:57:00.112725       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" need=6 creating=1\nI1002 13:57:00.118227       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-847dcfb7fb to 6\"\nI1002 13:57:00.125354       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-n5tpf\"\nI1002 13:57:00.125898       1 event.go:291] \"Event occurred\" object=\"cronjob-2399/concurrent\" kind=\"CronJob\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job concurrent-27219717\"\nI1002 13:57:00.180899       1 cronjob_controllerv2.go:193] \"error cleaning up jobs\" cronjob=\"cronjob-2399/concurrent\" resourceVersion=\"38439\" err=\"Operation cannot be fulfilled on cronjobs.batch \\\"concurrent\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE1002 13:57:00.180921       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-2399/concurrent, requeuing: Operation cannot be fulfilled on cronjobs.batch \"concurrent\": the object has been modified; please apply your changes to the latest version and try again\nI1002 13:57:00.181563       1 event.go:291] \"Event occurred\" object=\"cronjob-2399/concurrent-27219717\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: concurrent-27219717-l2gmp\"\nI1002 13:57:00.198468       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-847dcfb7fb to 5\"\nE1002 13:57:00.210809       1 replica_set.go:532] sync \"deployment-7806/webserver-847dcfb7fb\" failed with Operation cannot be fulfilled on replicasets.apps \"webserver-847dcfb7fb\": the object has been modified; please apply your changes to the latest version and try again\nI1002 13:57:00.210948       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" need=5 deleting=1\nI1002 13:57:00.210978       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5]\nI1002 13:57:00.211092       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-847dcfb7fb\" pod=\"deployment-7806/webserver-847dcfb7fb-n5tpf\"\nI1002 13:57:00.218066       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=5 creating=1\nI1002 13:57:00.219055       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6584b976d5 to 5\"\nI1002 13:57:00.223325       1 event.go:291] \"Event occurred\" 
object=\"deployment-7806/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-n5tpf\"\nI1002 13:57:00.241722       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-jqw4m\"\nI1002 13:57:00.336961       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-3274/test-cleanup-controller\" need=0 deleting=1\nI1002 13:57:00.337119       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-3274/test-cleanup-controller\" relatedReplicaSets=[test-cleanup-controller test-cleanup-deployment-5b4d99b59b]\nI1002 13:57:00.337248       1 controller_utils.go:602] \"Deleting pod\" controller=\"test-cleanup-controller\" pod=\"deployment-3274/test-cleanup-controller-8mm2z\"\nI1002 13:57:00.346761       1 event.go:291] \"Event occurred\" object=\"deployment-3274/test-cleanup-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-cleanup-controller to 0\"\nE1002 13:57:00.412463       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:57:00.518431       1 event.go:291] \"Event occurred\" object=\"deployment-3274/test-cleanup-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-cleanup-controller-8mm2z\"\nE1002 13:57:00.556223       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1002 13:57:00.799631       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-2941/default: secrets \"default-token-fp4rt\" is forbidden: unable to create new content in namespace ephemeral-2941 because it is being terminated\nE1002 13:57:00.805632       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:57:00.828729       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=5 creating=1\nI1002 13:57:00.851406       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-8szpj\"\nI1002 13:57:00.885461       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" need=4 deleting=1\nI1002 13:57:00.885493       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5]\nI1002 13:57:00.885728       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-847dcfb7fb to 4\"\nI1002 13:57:00.885956       1 controller_utils.go:602] \"Deleting pod\" 
controller=\"webserver-847dcfb7fb\" pod=\"deployment-7806/webserver-847dcfb7fb-tkhpj\"\nI1002 13:57:00.908181       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=6 creating=1\nI1002 13:57:00.911410       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6584b976d5 to 6\"\nI1002 13:57:00.925961       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-hvfsb\"\nI1002 13:57:00.946034       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-tkhpj\"\nI1002 13:57:01.034212       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=6 creating=1\nI1002 13:57:01.124697       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-wpzp9\"\nI1002 13:57:01.160816       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-f0c3bd2b-9018-42d8-a9df-9af9f40032f8\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6512^4\") on node \"ip-172-20-49-155.ap-southeast-2.compute.internal\" \nI1002 13:57:01.164694       1 operation_generator.go:1483] Verified volume is safe to detach for volume \"pvc-f0c3bd2b-9018-42d8-a9df-9af9f40032f8\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6512^4\") on node \"ip-172-20-49-155.ap-southeast-2.compute.internal\" \nI1002 13:57:01.572115       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=6 creating=2\nI1002 13:57:01.575070       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" need=3 deleting=1\nI1002 13:57:01.575491       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5]\nI1002 13:57:01.575661       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-847dcfb7fb to 3\"\nI1002 13:57:01.577100       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-847dcfb7fb\" pod=\"deployment-7806/webserver-847dcfb7fb-zpfzb\"\nI1002 13:57:01.585277       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6584b976d5 to 7\"\nI1002 13:57:01.588334       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7806/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1002 13:57:01.625036       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-fgsfx\"\nI1002 13:57:01.675352       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-zpfzb\"\nI1002 13:57:01.721857       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-dd9r9\"\nI1002 13:57:01.730573       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume \"pvc-f0c3bd2b-9018-42d8-a9df-9af9f40032f8\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6512^4\") on node \"ip-172-20-49-155.ap-southeast-2.compute.internal\" \nI1002 13:57:01.743874       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-6512/pvc-9clqd\"\nI1002 13:57:01.750451       1 pv_controller.go:640] volume \"pvc-f0c3bd2b-9018-42d8-a9df-9af9f40032f8\" is released and reclaim policy \"Delete\" will be executed\nI1002 13:57:01.754168       1 pv_controller.go:879] volume \"pvc-f0c3bd2b-9018-42d8-a9df-9af9f40032f8\" entered phase \"Released\"\nI1002 13:57:01.755748       1 pv_controller.go:1341] isVolumeReleased[pvc-f0c3bd2b-9018-42d8-a9df-9af9f40032f8]: volume is released\nI1002 13:57:01.774616       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-6512/pvc-9clqd\" was already processed\nI1002 13:57:01.823870       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" need=3 creating=1\nI1002 13:57:01.827526       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7354-6607\nI1002 13:57:01.965811       1 garbagecollector.go:471] \"Processing object\" object=\"cronjob-2399/concurrent-27219717\" objectUID=8291bd5b-6a32-45b8-9933-42c3f5c4db0f kind=\"Job\" virtual=false\nI1002 13:57:01.966208       1 garbagecollector.go:471] \"Processing object\" object=\"cronjob-2399/concurrent-27219716\" objectUID=cabee0e0-be07-45ee-bb9d-82045d6a43bb kind=\"Job\" virtual=false\nI1002 13:57:01.981506       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-md4vl\"\nI1002 13:57:01.994865       1 garbagecollector.go:580] \"Deleting object\" object=\"cronjob-2399/concurrent-27219717\" objectUID=8291bd5b-6a32-45b8-9933-42c3f5c4db0f kind=\"Job\" propagationPolicy=Background\nI1002 13:57:01.996874       1 garbagecollector.go:580] \"Deleting object\" object=\"cronjob-2399/concurrent-27219716\" objectUID=cabee0e0-be07-45ee-bb9d-82045d6a43bb kind=\"Job\" propagationPolicy=Background\nI1002 13:57:02.000142       1 garbagecollector.go:471] \"Processing object\" object=\"cronjob-2399/concurrent-27219717-l2gmp\" objectUID=599a0a5e-db15-41cb-81bc-f7e76b411043 kind=\"Pod\" virtual=false\nI1002 13:57:02.001531       1 garbagecollector.go:471] \"Processing object\" object=\"cronjob-2399/concurrent-27219716-rw276\" objectUID=03faeb7d-336a-4a83-a299-333c96e2f075 kind=\"Pod\" virtual=false\nI1002 13:57:02.010280       1 garbagecollector.go:580] \"Deleting object\" object=\"cronjob-2399/concurrent-27219717-l2gmp\" objectUID=599a0a5e-db15-41cb-81bc-f7e76b411043 kind=\"Pod\" propagationPolicy=Background\nI1002 13:57:02.010614       1 garbagecollector.go:580] \"Deleting 
object\" object=\"cronjob-2399/concurrent-27219716-rw276\" objectUID=03faeb7d-336a-4a83-a299-333c96e2f075 kind=\"Pod\" propagationPolicy=Background\nI1002 13:57:02.067951       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-3887/pvc-9zrwv\"\nI1002 13:57:02.075905       1 pv_controller.go:640] volume \"pvc-e0da8c37-d595-4e92-bc01-c38be2045b85\" is released and reclaim policy \"Delete\" will be executed\nI1002 13:57:02.082805       1 pv_controller.go:879] volume \"pvc-e0da8c37-d595-4e92-bc01-c38be2045b85\" entered phase \"Released\"\nI1002 13:57:02.084112       1 pv_controller.go:1341] isVolumeReleased[pvc-e0da8c37-d595-4e92-bc01-c38be2045b85]: volume is released\nI1002 13:57:02.094254       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-3887/pvc-9zrwv\" was already processed\nI1002 13:57:02.124315       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" need=3 creating=1\nI1002 13:57:02.170902       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=7 creating=1\nI1002 13:57:02.288391       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-gxfsg\"\nI1002 13:57:02.324997       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-679xt\"\nE1002 13:57:02.438957       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1002 13:57:02.689833       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9686/default: secrets \"default-token-bffcs\" is forbidden: unable to create new content in namespace provisioning-9686 because it is being terminated\nI1002 13:57:03.195049       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-8182/pvc-msfx2\"\nI1002 13:57:03.208686       1 pv_controller.go:640] volume \"local-x56z8\" is released and reclaim policy \"Retain\" will be executed\nI1002 13:57:03.211771       1 pv_controller.go:879] volume \"local-x56z8\" entered phase \"Released\"\nI1002 13:57:03.391029       1 pv_controller_base.go:505] deletion of claim \"provisioning-8182/pvc-msfx2\" was already processed\nI1002 13:57:03.595024       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-dd4c41eb-ae74-45ee-a1cb-2e49052a0b47\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0da9a5d8b44b56006\") on node \"ip-172-20-42-183.ap-southeast-2.compute.internal\" \nI1002 13:57:03.603406       1 operation_generator.go:1483] Verified volume is safe to detach for volume \"pvc-dd4c41eb-ae74-45ee-a1cb-2e49052a0b47\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0da9a5d8b44b56006\") on node \"ip-172-20-42-183.ap-southeast-2.compute.internal\" \nI1002 13:57:03.681555       1 namespace_controller.go:185] Namespace has been deleted kubectl-4477\nI1002 13:57:03.876280       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" need=2 deleting=1\nI1002 13:57:03.876385       1 replica_set.go:223] \"Found related ReplicaSets\" 
replicaSet=\"deployment-7806/webserver-847dcfb7fb\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5]\nI1002 13:57:03.876497       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-847dcfb7fb\" pod=\"deployment-7806/webserver-847dcfb7fb-gxfsg\"\nI1002 13:57:03.876742       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-847dcfb7fb to 2\"\nI1002 13:57:03.888887       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6584b976d5 to 8\"\nI1002 13:57:03.889264       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=8 creating=1\nI1002 13:57:03.890317       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-gxfsg\"\nI1002 13:57:03.905169       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-ql98h\"\nI1002 13:57:03.955746       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-847dcfb7fb to 1\"\nI1002 13:57:03.957843       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" need=1 deleting=1\nI1002 13:57:03.957970       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5]\nI1002 13:57:03.958217       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-847dcfb7fb\" pod=\"deployment-7806/webserver-847dcfb7fb-md4vl\"\nI1002 13:57:03.966857       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7806/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1002 13:57:03.967483       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-md4vl\"\nI1002 13:57:03.982214       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7806/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1002 13:57:04.109909       1 namespace_controller.go:185] Namespace has been deleted emptydir-8968\nI1002 13:57:05.440644       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-847dcfb7fb to 0\"\nI1002 13:57:05.440964       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-847dcfb7fb\" need=0 deleting=1\nI1002 13:57:05.441104       1 
I1002 13:57:05.441259       1 controller_utils.go:602] "Deleting pod" controller="webserver-847dcfb7fb" pod="deployment-7806/webserver-847dcfb7fb-24c5f"
I1002 13:57:05.449738       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-7806/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1002 13:57:05.458491       1 event.go:291] "Event occurred" object="deployment-7806/webserver-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-847dcfb7fb-24c5f"
I1002 13:57:05.847010       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-4729/pvc-kwlfw"
I1002 13:57:05.852226       1 pv_controller.go:640] volume "pvc-dd4c41eb-ae74-45ee-a1cb-2e49052a0b47" is released and reclaim policy "Delete" will be executed
I1002 13:57:05.858523       1 pv_controller.go:879] volume "pvc-dd4c41eb-ae74-45ee-a1cb-2e49052a0b47" entered phase "Released"
I1002 13:57:05.859860       1 pv_controller.go:1341] isVolumeReleased[pvc-dd4c41eb-ae74-45ee-a1cb-2e49052a0b47]: volume is released
I1002 13:57:05.906633       1 namespace_controller.go:185] Namespace has been deleted ephemeral-2941
I1002 13:57:06.055408       1 replica_set.go:559] "Too few replicas" replicaSet="webhook-5874/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I1002 13:57:06.056141       1 event.go:291] "Event occurred" object="webhook-5874/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I1002 13:57:06.064187       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-southeast-2a/vol-0da9a5d8b44b56006: error deleting EBS volume "vol-0da9a5d8b44b56006" since volume is currently attached to "i-04eb7a9acdc53fb9b"
E1002 13:57:06.064246       1 goroutinemap.go:150] Operation for "delete-pvc-dd4c41eb-ae74-45ee-a1cb-2e49052a0b47[0982b743-4d46-478b-9549-ef7f3a9977b2]" failed. No retries permitted until 2021-10-02 13:57:06.564226772 +0000 UTC m=+1332.892436176 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-0da9a5d8b44b56006\" since volume is currently attached to \"i-04eb7a9acdc53fb9b\""
I1002 13:57:06.064567       1 event.go:291] "Event occurred" object="pvc-dd4c41eb-ae74-45ee-a1cb-2e49052a0b47" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0da9a5d8b44b56006\" since volume is currently attached to \"i-04eb7a9acdc53fb9b\""
I1002 13:57:06.067376       1 event.go:291] "Event occurred" object="webhook-5874/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-9qnr5"
I1002 13:57:06.071519       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-5874/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
E1002 13:57:06.383348       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1002 13:57:06.567634       1 namespace_controller.go:185] Namespace has been deleted kubectl-3503
I1002 13:57:06.796722       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-2941-236/csi-hostpath-attacher
I1002 13:57:06.796730       1 garbagecollector.go:471] "Processing object" object="ephemeral-2941-236/csi-hostpath-attacher-7c5686c967" objectUID=81e1cc87-8f23-4070-af03-5f4b622381c1 kind="ControllerRevision" virtual=false
I1002 13:57:06.797096       1 garbagecollector.go:471] "Processing object" object="ephemeral-2941-236/csi-hostpath-attacher-0" objectUID=773c5f66-9356-4ce3-ad84-0643e173a81e kind="Pod" virtual=false
I1002 13:57:06.798712       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2941-236/csi-hostpath-attacher-7c5686c967" objectUID=81e1cc87-8f23-4070-af03-5f4b622381c1 kind="ControllerRevision" propagationPolicy=Background
I1002 13:57:06.799128       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2941-236/csi-hostpath-attacher-0" objectUID=773c5f66-9356-4ce3-ad84-0643e173a81e kind="Pod" propagationPolicy=Background
I1002 13:57:07.182293       1 garbagecollector.go:471] "Processing object" object="ephemeral-2941-236/csi-hostpathplugin-qxpgd" objectUID=492f11db-b89c-473d-967f-17247f1530ee kind="EndpointSlice" virtual=false
I1002 13:57:07.187924       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2941-236/csi-hostpathplugin-qxpgd" objectUID=492f11db-b89c-473d-967f-17247f1530ee kind="EndpointSlice" propagationPolicy=Background
I1002 13:57:07.384126       1 garbagecollector.go:471] "Processing object" object="ephemeral-2941-236/csi-hostpathplugin-5b8f7cd98f" objectUID=5b88ac25-e9be-4ceb-9d5b-3b8f160137d8 kind="ControllerRevision" virtual=false
I1002 13:57:07.384797       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-2941-236/csi-hostpathplugin
I1002 13:57:07.384969       1 garbagecollector.go:471] "Processing object" object="ephemeral-2941-236/csi-hostpathplugin-0" objectUID=a7a334ae-1a2d-49fd-ade8-27114701ffd8 kind="Pod" virtual=false
I1002 13:57:07.387607       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2941-236/csi-hostpathplugin-5b8f7cd98f" objectUID=5b88ac25-e9be-4ceb-9d5b-3b8f160137d8 kind="ControllerRevision" propagationPolicy=Background
I1002 13:57:07.388416       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2941-236/csi-hostpathplugin-0" objectUID=a7a334ae-1a2d-49fd-ade8-27114701ffd8 kind="Pod" propagationPolicy=Background
E1002 13:57:07.410918       1 tokens_controller.go:262] error synchronizing serviceaccount cronjob-2399/default: serviceaccounts "default" not found
I1002 13:57:07.580023       1 garbagecollector.go:471] "Processing object" object="ephemeral-2941-236/csi-hostpath-provisioner-56b7d898d6" objectUID=2af6cf52-1358-4d27-8ae4-4782363c5da0 kind="ControllerRevision" virtual=false
I1002 13:57:07.580121       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-2941-236/csi-hostpath-provisioner
I1002 13:57:07.580208       1 garbagecollector.go:471] "Processing object" object="ephemeral-2941-236/csi-hostpath-provisioner-0" objectUID=672be44d-dd33-4da1-ae1e-94b038711aa3 kind="Pod" virtual=false
I1002 13:57:07.584952       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2941-236/csi-hostpath-provisioner-56b7d898d6" objectUID=2af6cf52-1358-4d27-8ae4-4782363c5da0 kind="ControllerRevision" propagationPolicy=Background
I1002 13:57:07.585219       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2941-236/csi-hostpath-provisioner-0" objectUID=672be44d-dd33-4da1-ae1e-94b038711aa3 kind="Pod" propagationPolicy=Background
E1002 13:57:07.690237       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-3887/default: secrets "default-token-5pqjt" is forbidden: unable to create new content in namespace csi-mock-volumes-3887 because it is being terminated
I1002 13:57:07.716069       1 namespace_controller.go:185] Namespace has been deleted provisioning-9686
I1002 13:57:08.077588       1 garbagecollector.go:471] "Processing object" object="ephemeral-2941-236/csi-hostpath-resizer-fcb5f6469" objectUID=4695fee6-b9c7-4119-b494-a07277af6d9f kind="ControllerRevision" virtual=false
I1002 13:57:08.077721       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-2941-236/csi-hostpath-resizer
I1002 13:57:08.077793       1 garbagecollector.go:471] "Processing object" object="ephemeral-2941-236/csi-hostpath-resizer-0" objectUID=89b7db2b-852c-4414-8487-7d4481e3c62b kind="Pod" virtual=false
I1002 13:57:08.080012       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2941-236/csi-hostpath-resizer-fcb5f6469" objectUID=4695fee6-b9c7-4119-b494-a07277af6d9f kind="ControllerRevision" propagationPolicy=Background
I1002 13:57:08.080367       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2941-236/csi-hostpath-resizer-0" objectUID=89b7db2b-852c-4414-8487-7d4481e3c62b kind="Pod" propagationPolicy=Background
E1002 13:57:08.269495       1 tokens_controller.go:262] error synchronizing serviceaccount deployment-3274/default: secrets "default-token-ppn5v" is forbidden: unable to create new content in namespace deployment-3274 because it is being terminated
I1002 13:57:08.271055       1 garbagecollector.go:471] "Processing object" object="ephemeral-2941-236/csi-hostpath-snapshotter-85f7746dc" objectUID=d548a46f-8219-4cb9-acb6-2850fb22ea00 kind="ControllerRevision" virtual=false
I1002 13:57:08.271217       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-2941-236/csi-hostpath-snapshotter
ephemeral-2941-236/csi-hostpath-snapshotter\nI1002 13:57:08.271272       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2941-236/csi-hostpath-snapshotter-0\" objectUID=ce2d04a8-9954-476d-8de4-3633b3dc07b6 kind=\"Pod\" virtual=false\nI1002 13:57:08.272997       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2941-236/csi-hostpath-snapshotter-0\" objectUID=ce2d04a8-9954-476d-8de4-3633b3dc07b6 kind=\"Pod\" propagationPolicy=Background\nI1002 13:57:08.273288       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2941-236/csi-hostpath-snapshotter-85f7746dc\" objectUID=d548a46f-8219-4cb9-acb6-2850fb22ea00 kind=\"ControllerRevision\" propagationPolicy=Background\nI1002 13:57:08.358408       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-3274/test-cleanup-deployment-5b4d99b59b\" need=1 creating=1\nI1002 13:57:08.421862       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-3274/test-cleanup-deployment\"\nI1002 13:57:08.443126       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-4827\nI1002 13:57:08.626316       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-7330\nI1002 13:57:08.675377       1 pv_controller.go:930] claim \"provisioning-5798/pvc-2srhl\" bound to volume \"local-4djzr\"\nI1002 13:57:08.679743       1 pv_controller.go:1341] isVolumeReleased[pvc-0e2b595c-818c-4ad5-8e23-b251746158fb]: volume is released\nI1002 13:57:08.681669       1 pv_controller.go:1341] isVolumeReleased[pvc-dd4c41eb-ae74-45ee-a1cb-2e49052a0b47]: volume is released\nI1002 13:57:08.682502       1 pv_controller.go:1341] isVolumeReleased[pvc-b7622837-7b54-4324-86c4-88c21796a514]: volume is released\nI1002 13:57:08.683091       1 pv_controller.go:879] volume \"local-4djzr\" entered phase \"Bound\"\nI1002 13:57:08.683854       1 pv_controller.go:982] volume \"local-4djzr\" bound to claim \"provisioning-5798/pvc-2srhl\"\nI1002 13:57:08.690337       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9686-8726/csi-hostpath-attacher-75f98bf49b\" objectUID=6c1becc4-2615-4e82-88bd-8e98ce7f3c65 kind=\"ControllerRevision\" virtual=false\nI1002 13:57:08.690702       1 stateful_set.go:419] StatefulSet has been deleted provisioning-9686-8726/csi-hostpath-attacher\nI1002 13:57:08.690843       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9686-8726/csi-hostpath-attacher-0\" objectUID=0f84af9a-bbc8-42a9-a276-219886a9a040 kind=\"Pod\" virtual=false\nI1002 13:57:08.694979       1 pv_controller.go:823] claim \"provisioning-5798/pvc-2srhl\" entered phase \"Bound\"\nI1002 13:57:08.695343       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9686-8726/csi-hostpath-attacher-75f98bf49b\" objectUID=6c1becc4-2615-4e82-88bd-8e98ce7f3c65 kind=\"ControllerRevision\" propagationPolicy=Background\nI1002 13:57:08.695421       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9686-8726/csi-hostpath-attacher-0\" objectUID=0f84af9a-bbc8-42a9-a276-219886a9a040 kind=\"Pod\" propagationPolicy=Background\nE1002 13:57:08.808031       1 tokens_controller.go:262] error synchronizing serviceaccount apf-935/default: secrets \"default-token-9dgz7\" is forbidden: unable to create new content in namespace apf-935 because it is being terminated\nI1002 13:57:08.847758       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-southeast-2a/vol-0dea2f227751a855a\nI1002 13:57:08.847941       1 
pv_controller.go:1436] volume \"pvc-0e2b595c-818c-4ad5-8e23-b251746158fb\" deleted\nI1002 13:57:08.857331       1 pv_controller_base.go:505] deletion of claim \"volume-1499/awswqvmb\" was already processed\nI1002 13:57:08.901196       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-southeast-2a/vol-0da9a5d8b44b56006\nI1002 13:57:08.901218       1 pv_controller.go:1436] volume \"pvc-dd4c41eb-ae74-45ee-a1cb-2e49052a0b47\" deleted\nI1002 13:57:08.902962       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-southeast-2a/vol-0148edae8bb66633d\nI1002 13:57:08.903077       1 pv_controller.go:1436] volume \"pvc-b7622837-7b54-4324-86c4-88c21796a514\" deleted\nI1002 13:57:08.912990       1 pv_controller_base.go:505] deletion of claim \"provisioning-4729/pvc-kwlfw\" was already processed\nI1002 13:57:08.917170       1 pv_controller_base.go:505] deletion of claim \"provisioning-6097/aws98brb\" was already processed\nI1002 13:57:08.972396       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=7 deleting=1\nI1002 13:57:08.972443       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-6584b976d5\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5]\nI1002 13:57:08.972550       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-6584b976d5\" pod=\"deployment-7806/webserver-6584b976d5-ql98h\"\nI1002 13:57:08.973365       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6584b976d5 to 7\"\nI1002 13:57:08.981930       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6584b976d5-ql98h\"\nI1002 13:57:08.987769       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" need=2 creating=2\nI1002 13:57:08.988000       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7ff7669dd9 to 2\"\nI1002 13:57:08.999475       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-7ff7669dd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7ff7669dd9-f2mjx\"\nI1002 13:57:09.005811       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7806/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1002 13:57:09.009308       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-7ff7669dd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7ff7669dd9-8nfvn\"\nI1002 13:57:09.012098       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume \"pvc-dd4c41eb-ae74-45ee-a1cb-2e49052a0b47\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-southeast-2a/vol-0da9a5d8b44b56006\") on node \"ip-172-20-42-183.ap-southeast-2.compute.internal\" \nI1002 13:57:09.024940       1 replica_set.go:595] \"Too many replicas\" 
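
The repeated "Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified" entries are ordinary optimistic-concurrency conflicts: two writers raced on the same resourceVersion and the loser requeues. A sketch of the standard client-go pattern for handling the same conflict in user code, re-reading and re-applying inside retry.RetryOnConflict (function and names are illustrative, not from this job):

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// scaleDeployment re-reads the Deployment and re-applies the mutation on
// every attempt, so a Conflict from a stale resourceVersion simply retries
// instead of surfacing the "please apply your changes" error above.
func scaleDeployment(client kubernetes.Interface, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := client.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = &replicas
		_, err = client.AppsV1().Deployments(ns).Update(context.TODO(), d, metav1.UpdateOptions{})
		return err // a Conflict here triggers another attempt
	})
}
```
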
replicaSet=\"deployment-7806/webserver-6584b976d5\" need=6 deleting=1\nI1002 13:57:09.025700       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-6584b976d5\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9]\nI1002 13:57:09.025509       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6584b976d5 to 6\"\nI1002 13:57:09.029662       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-6584b976d5\" pod=\"deployment-7806/webserver-6584b976d5-wpzp9\"\nI1002 13:57:09.049379       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7ff7669dd9 to 3\"\nI1002 13:57:09.063577       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6584b976d5-wpzp9\"\nE1002 13:57:09.068431       1 replica_set.go:532] sync \"deployment-7806/webserver-7ff7669dd9\" failed with Operation cannot be fulfilled on replicasets.apps \"webserver-7ff7669dd9\": the object has been modified; please apply your changes to the latest version and try again\nI1002 13:57:09.069553       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" need=3 creating=1\nI1002 13:57:09.099777       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9686-8726/csi-hostpathplugin-p7qmm\" objectUID=f4c8ec99-5bfb-4b54-aa9a-ab1f8e030b66 kind=\"EndpointSlice\" virtual=false\nI1002 13:57:09.112120       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-7ff7669dd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7ff7669dd9-cvpb2\"\nI1002 13:57:09.136219       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9686-8726/csi-hostpathplugin-p7qmm\" objectUID=f4c8ec99-5bfb-4b54-aa9a-ab1f8e030b66 kind=\"EndpointSlice\" propagationPolicy=Background\nI1002 13:57:09.341247       1 stateful_set.go:419] StatefulSet has been deleted provisioning-9686-8726/csi-hostpathplugin\nI1002 13:57:09.341249       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9686-8726/csi-hostpathplugin-668889b794\" objectUID=25ac8dfd-32d0-482e-a256-35289255209e kind=\"ControllerRevision\" virtual=false\nI1002 13:57:09.341464       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9686-8726/csi-hostpathplugin-0\" objectUID=3ef0ff28-285a-49a8-a4da-8121ce28e45a kind=\"Pod\" virtual=false\nI1002 13:57:09.343136       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9686-8726/csi-hostpathplugin-668889b794\" objectUID=25ac8dfd-32d0-482e-a256-35289255209e kind=\"ControllerRevision\" propagationPolicy=Background\nI1002 13:57:09.344087       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9686-8726/csi-hostpathplugin-0\" objectUID=3ef0ff28-285a-49a8-a4da-8121ce28e45a kind=\"Pod\" propagationPolicy=Background\nI1002 13:57:09.530578       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9686-8726/csi-hostpath-provisioner-55b7f6d67f\" objectUID=5ac3b849-744b-44cd-9802-1b1d18098ffc 
kind=\"ControllerRevision\" virtual=false\nI1002 13:57:09.530912       1 stateful_set.go:419] StatefulSet has been deleted provisioning-9686-8726/csi-hostpath-provisioner\nI1002 13:57:09.531154       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9686-8726/csi-hostpath-provisioner-0\" objectUID=9b7d5109-becb-4376-9163-278cec877019 kind=\"Pod\" virtual=false\nI1002 13:57:09.533145       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9686-8726/csi-hostpath-provisioner-0\" objectUID=9b7d5109-becb-4376-9163-278cec877019 kind=\"Pod\" propagationPolicy=Background\nI1002 13:57:09.533470       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9686-8726/csi-hostpath-provisioner-55b7f6d67f\" objectUID=5ac3b849-744b-44cd-9802-1b1d18098ffc kind=\"ControllerRevision\" propagationPolicy=Background\nI1002 13:57:09.721631       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9686-8726/csi-hostpath-resizer-585f46b646\" objectUID=8b1c0787-8611-4ecf-b1de-9c5f34a40b45 kind=\"ControllerRevision\" virtual=false\nI1002 13:57:09.722065       1 stateful_set.go:419] StatefulSet has been deleted provisioning-9686-8726/csi-hostpath-resizer\nI1002 13:57:09.722225       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9686-8726/csi-hostpath-resizer-0\" objectUID=b4217881-db65-4d0a-9d59-81bb0058cae3 kind=\"Pod\" virtual=false\nI1002 13:57:09.723814       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9686-8726/csi-hostpath-resizer-0\" objectUID=b4217881-db65-4d0a-9d59-81bb0058cae3 kind=\"Pod\" propagationPolicy=Background\nI1002 13:57:09.725725       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9686-8726/csi-hostpath-resizer-585f46b646\" objectUID=8b1c0787-8611-4ecf-b1de-9c5f34a40b45 kind=\"ControllerRevision\" propagationPolicy=Background\nI1002 13:57:09.911806       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9686-8726/csi-hostpath-snapshotter-6c48bc5bbd\" objectUID=07660117-2d52-4296-b1a3-8f7668ed4f68 kind=\"ControllerRevision\" virtual=false\nI1002 13:57:09.912088       1 stateful_set.go:419] StatefulSet has been deleted provisioning-9686-8726/csi-hostpath-snapshotter\nI1002 13:57:09.912119       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9686-8726/csi-hostpath-snapshotter-0\" objectUID=f8d89756-7491-475b-99a6-1afe1d40ed5d kind=\"Pod\" virtual=false\nI1002 13:57:09.914394       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9686-8726/csi-hostpath-snapshotter-6c48bc5bbd\" objectUID=07660117-2d52-4296-b1a3-8f7668ed4f68 kind=\"ControllerRevision\" propagationPolicy=Background\nI1002 13:57:09.915242       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9686-8726/csi-hostpath-snapshotter-0\" objectUID=f8d89756-7491-475b-99a6-1afe1d40ed5d kind=\"Pod\" propagationPolicy=Background\nE1002 13:57:10.159496       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:57:10.171754       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=5 deleting=1\nI1002 13:57:10.172077       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-6584b976d5\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 
webserver-7ff7669dd9]\nI1002 13:57:10.172537       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-6584b976d5\" pod=\"deployment-7806/webserver-6584b976d5-hvfsb\"\nI1002 13:57:10.172043       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6584b976d5 to 5\"\nI1002 13:57:10.190464       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7ff7669dd9 to 4\"\nI1002 13:57:10.195183       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" need=4 creating=1\nI1002 13:57:10.198250       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6584b976d5-hvfsb\"\nI1002 13:57:10.214576       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-7ff7669dd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7ff7669dd9-wkcfl\"\nI1002 13:57:10.240033       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7806/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1002 13:57:10.389438       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=4 deleting=1\nI1002 13:57:10.389620       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-6584b976d5\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9]\nI1002 13:57:10.390064       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-6584b976d5\" pod=\"deployment-7806/webserver-6584b976d5-679xt\"\nI1002 13:57:10.390570       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6584b976d5 to 4\"\nI1002 13:57:10.410890       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" need=5 creating=1\nI1002 13:57:10.411920       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7ff7669dd9 to 5\"\nI1002 13:57:10.418933       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-7ff7669dd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7ff7669dd9-8lz5d\"\nI1002 13:57:10.426336       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7806/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1002 13:57:10.426816       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" 
message=\"Deleted pod: webserver-6584b976d5-679xt\"\nI1002 13:57:10.629159       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [webhook.example.com/v1, Resource=e2e-test-webhook-6477-crds], removed: [stable.example.com/v2, Resource=e2e-test-crd-webhook-2506-crds]\nI1002 13:57:10.633182       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-webhook-6477-crds.webhook.example.com\nI1002 13:57:10.633314       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI1002 13:57:10.733414       1 shared_informer.go:247] Caches are synced for resource quota \nI1002 13:57:10.733503       1 resource_quota_controller.go:454] synced quota controller\nI1002 13:57:11.011957       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [mygroup.example.com/v1beta1, Resource=fooxn4smas webhook.example.com/v1, Resource=e2e-test-webhook-6477-crds], removed: [stable.example.com/v2, Resource=e2e-test-crd-webhook-2506-crds]\nI1002 13:57:11.045977       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI1002 13:57:11.049331       1 graph_builder.go:587] add [mygroup.example.com/v1beta1/fooxn4sma, namespace: , name: canary5hbkl, uid: 4e400e35-7673-4bce-912b-15c223780091] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI1002 13:57:11.147270       1 shared_informer.go:247] Caches are synced for garbage collector \nI1002 13:57:11.147471       1 garbagecollector.go:254] synced garbage collector\nI1002 13:57:11.147602       1 garbagecollector.go:471] \"Processing object\" object=\"ownerjxnt9\" objectUID=6c4799c7-c996-45e9-8abc-cee9f75b9be4 kind=\"fooxn4sma\" virtual=true\nI1002 13:57:11.148000       1 garbagecollector.go:471] \"Processing object\" object=\"canary5hbkl\" objectUID=4e400e35-7673-4bce-912b-15c223780091 kind=\"fooxn4sma\" virtual=false\nI1002 13:57:11.161988       1 garbagecollector.go:471] \"Processing object\" object=\"dependentgnjnx\" objectUID=7c8bf578-8c0d-43d8-90f6-b48faf39d6d6 kind=\"fooxn4sma\" virtual=false\nI1002 13:57:11.162357       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [mygroup.example.com/v1beta1/fooxn4sma, namespace: , name: canary5hbkl, uid: 4e400e35-7673-4bce-912b-15c223780091]\nI1002 13:57:11.172247       1 garbagecollector.go:580] \"Deleting object\" object=\"dependentgnjnx\" objectUID=7c8bf578-8c0d-43d8-90f6-b48faf39d6d6 kind=\"fooxn4sma\" propagationPolicy=Background\nI1002 13:57:11.393328       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=3 deleting=1\nI1002 13:57:11.393578       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-6584b976d5\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9]\nI1002 13:57:11.393949       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-6584b976d5\" pod=\"deployment-7806/webserver-6584b976d5-s7jcl\"\nI1002 13:57:11.394985       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6584b976d5 to 3\"\nI1002 13:57:11.412180       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6584b976d5-s7jcl\"\nI1002 13:57:11.420107       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" need=6 creating=1\nI1002 13:57:11.421173       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7ff7669dd9 to 6\"\nI1002 13:57:11.436803       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-7ff7669dd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7ff7669dd9-ltf7n\"\nI1002 13:57:11.525350       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7806/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1002 13:57:12.028053       1 controller.go:400] Ensuring load balancer for service deployment-8344/test-rolling-update-with-lb\nI1002 13:57:12.028122       1 aws.go:3901] EnsureLoadBalancer(e2e-de872154ff-19973.test-cncf-aws.k8s.io, deployment-8344, test-rolling-update-with-lb, ap-southeast-2, , [{ TCP <nil> 80 {0 80 } 31624}], map[])\nI1002 13:57:12.028744       1 event.go:291] \"Event occurred\" object=\"deployment-8344/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Normal\" reason=\"EnsuringLoadBalancer\" message=\"Ensuring load balancer\"\nI1002 13:57:12.342845       1 aws.go:3122] Existing security group ingress: sg-008ccf64fef292f04 [{\n  FromPort: 80,\n  IpProtocol: \"tcp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 80\n} {\n  FromPort: 3,\n  IpProtocol: \"icmp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 4\n}]\nI1002 13:57:12.404643       1 aws_loadbalancer.go:1185] Creating additional load balancer tags for a9ae45656c8114168abb7b8ba1c0124f\nI1002 13:57:12.426292       1 aws_loadbalancer.go:1212] Updating load-balancer attributes for \"a9ae45656c8114168abb7b8ba1c0124f\"\nI1002 13:57:12.600856       1 pv_controller.go:879] volume \"local-pvtjz8l\" entered phase \"Available\"\nI1002 13:57:12.630239       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6512\nI1002 13:57:12.644286       1 aws.go:4520] Adding rule for traffic from the load balancer (sg-008ccf64fef292f04) to instances (sg-0f6a234837a11bb17)\nI1002 13:57:12.702354       1 aws.go:3197] Existing security group ingress: sg-0f6a234837a11bb17 [{\n  FromPort: 30000,\n  IpProtocol: \"tcp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 32767\n} {\n  IpProtocol: \"-1\",\n  UserIdGroupPairs: [{\n      GroupId: \"sg-0d9bef12d43d02391\",\n      UserId: \"768319786644\"\n    },{\n      GroupId: \"sg-0f6a234837a11bb17\",\n      UserId: \"768319786644\"\n    }]\n} {\n  FromPort: 22,\n  IpProtocol: \"tcp\",\n  IpRanges: [{\n      CidrIp: \"34.70.68.82/32\"\n    }],\n  ToPort: 22\n} {\n  FromPort: 30000,\n  IpProtocol: \"udp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 32767\n}]\nI1002 13:57:12.702445       1 aws.go:3094] Comparing sg-008ccf64fef292f04 to sg-0d9bef12d43d02391\nI1002 13:57:12.702453       1 aws.go:3094] Comparing sg-008ccf64fef292f04 to sg-0f6a234837a11bb17\nI1002 13:57:12.702558       1 aws.go:3225] Adding security group ingress: sg-0f6a234837a11bb17 [{\n  IpProtocol: \"-1\",\n  
UserIdGroupPairs: [{\n      GroupId: \"sg-008ccf64fef292f04\"\n    }]\n}]\nI1002 13:57:12.793552       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-3887\nI1002 13:57:12.794019       1 pv_controller.go:930] claim \"persistent-local-volumes-test-7472/pvc-fvxds\" bound to volume \"local-pvtjz8l\"\nI1002 13:57:12.810010       1 pv_controller.go:879] volume \"local-pvtjz8l\" entered phase \"Bound\"\nI1002 13:57:12.810225       1 pv_controller.go:982] volume \"local-pvtjz8l\" bound to claim \"persistent-local-volumes-test-7472/pvc-fvxds\"\nI1002 13:57:12.817924       1 pv_controller.go:823] claim \"persistent-local-volumes-test-7472/pvc-fvxds\" entered phase \"Bound\"\nI1002 13:57:13.023523       1 aws_loadbalancer.go:1460] Instances added to load-balancer a9ae45656c8114168abb7b8ba1c0124f\nI1002 13:57:13.023553       1 aws.go:4286] Loadbalancer a9ae45656c8114168abb7b8ba1c0124f (deployment-8344/test-rolling-update-with-lb) has DNS name a9ae45656c8114168abb7b8ba1c0124f-64569078.ap-southeast-2.elb.amazonaws.com\nI1002 13:57:13.023594       1 controller.go:942] Patching status for service deployment-8344/test-rolling-update-with-lb\nI1002 13:57:13.023927       1 event.go:291] \"Event occurred\" object=\"deployment-8344/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Normal\" reason=\"EnsuredLoadBalancer\" message=\"Ensured load balancer\"\nI1002 13:57:13.440458       1 namespace_controller.go:185] Namespace has been deleted deployment-3274\nE1002 13:57:13.861027       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-2941-236/default: secrets \"default-token-tn2t9\" is forbidden: unable to create new content in namespace ephemeral-2941-236 because it is being terminated\nE1002 13:57:13.910981       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:57:13.930180       1 namespace_controller.go:185] Namespace has been deleted apf-935\nE1002 13:57:13.931980       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:57:14.243007       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-6512-8748/csi-mockplugin\nI1002 13:57:14.243112       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6512-8748/csi-mockplugin-89db497b8\" objectUID=b8a24984-b7d4-44bf-9b03-57a21f66b5d2 kind=\"ControllerRevision\" virtual=false\nI1002 13:57:14.243219       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6512-8748/csi-mockplugin-0\" objectUID=86d1707d-7f6a-4cf2-8b46-52338ea6f0ef kind=\"Pod\" virtual=false\nI1002 13:57:14.245523       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6512-8748/csi-mockplugin-0\" objectUID=86d1707d-7f6a-4cf2-8b46-52338ea6f0ef kind=\"Pod\" propagationPolicy=Background\nI1002 13:57:14.245663       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6512-8748/csi-mockplugin-89db497b8\" objectUID=b8a24984-b7d4-44bf-9b03-57a21f66b5d2 kind=\"ControllerRevision\" propagationPolicy=Background\nI1002 13:57:14.625253       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6512-8748/csi-mockplugin-attacher-7968cddb6d\" 
objectUID=8f756ea0-2177-47ea-850e-8796f019a7df kind=\"ControllerRevision\" virtual=false\nI1002 13:57:14.625554       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-6512-8748/csi-mockplugin-attacher\nI1002 13:57:14.625600       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6512-8748/csi-mockplugin-attacher-0\" objectUID=f8fa5888-3f9b-48a3-b3aa-98661a68109f kind=\"Pod\" virtual=false\nI1002 13:57:14.629638       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6512-8748/csi-mockplugin-attacher-0\" objectUID=f8fa5888-3f9b-48a3-b3aa-98661a68109f kind=\"Pod\" propagationPolicy=Background\nI1002 13:57:14.629914       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6512-8748/csi-mockplugin-attacher-7968cddb6d\" objectUID=8f756ea0-2177-47ea-850e-8796f019a7df kind=\"ControllerRevision\" propagationPolicy=Background\nI1002 13:57:14.639591       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-3887-8468/csi-mockplugin-8df8bbb46\" objectUID=32118982-04a0-41c7-ac2f-0c73d95aed63 kind=\"ControllerRevision\" virtual=false\nI1002 13:57:14.639839       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-3887-8468/csi-mockplugin\nI1002 13:57:14.639881       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-3887-8468/csi-mockplugin-0\" objectUID=46d0dbd7-2a32-4915-a689-5315109dce56 kind=\"Pod\" virtual=false\nI1002 13:57:14.643186       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-3887-8468/csi-mockplugin-8df8bbb46\" objectUID=32118982-04a0-41c7-ac2f-0c73d95aed63 kind=\"ControllerRevision\" propagationPolicy=Background\nI1002 13:57:14.643444       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-3887-8468/csi-mockplugin-0\" objectUID=46d0dbd7-2a32-4915-a689-5315109dce56 kind=\"Pod\" propagationPolicy=Background\nI1002 13:57:15.057765       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=2 deleting=1\nI1002 13:57:15.057970       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-6584b976d5\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9]\nI1002 13:57:15.058290       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6584b976d5 to 2\"\nI1002 13:57:15.058308       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-6584b976d5\" pod=\"deployment-7806/webserver-6584b976d5-8szpj\"\nI1002 13:57:15.067745       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" need=7 creating=1\nI1002 13:57:15.072426       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7ff7669dd9 to 7\"\nI1002 13:57:15.080766       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6584b976d5-8szpj\"\nI1002 13:57:15.085393       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-7ff7669dd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" 
message=\"Created pod: webserver-7ff7669dd9-6mhrb\"\nI1002 13:57:15.375312       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=1 deleting=1\nI1002 13:57:15.375464       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-6584b976d5\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9]\nI1002 13:57:15.375650       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-6584b976d5\" pod=\"deployment-7806/webserver-6584b976d5-fgsfx\"\nI1002 13:57:15.381853       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6584b976d5 to 1\"\nI1002 13:57:15.407433       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6584b976d5-fgsfx\"\nI1002 13:57:15.658607       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=0 deleting=1\nI1002 13:57:15.658840       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-6584b976d5\" relatedReplicaSets=[webserver-6584b976d5 webserver-7ff7669dd9 webserver-847dcfb7fb]\nI1002 13:57:15.658984       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-6584b976d5\" pod=\"deployment-7806/webserver-6584b976d5-dd9r9\"\nI1002 13:57:15.660317       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6584b976d5 to 0\"\nI1002 13:57:15.679490       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7806/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1002 13:57:15.696908       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6584b976d5-dd9r9\"\nI1002 13:57:15.747009       1 namespace_controller.go:185] Namespace has been deleted tables-5475\nE1002 13:57:15.845547       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9686-8726/default: secrets \"default-token-4wfrn\" is forbidden: unable to create new content in namespace provisioning-9686-8726 because it is being terminated\nI1002 13:57:16.018419       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-5874/e2e-test-webhook-892rc\" objectUID=6a72d586-62f3-4273-965a-33172b51d4fa kind=\"EndpointSlice\" virtual=false\nE1002 13:57:16.054355       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:57:16.054536       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-5874/e2e-test-webhook-892rc\" objectUID=6a72d586-62f3-4273-965a-33172b51d4fa kind=\"EndpointSlice\" propagationPolicy=Background\nI1002 13:57:16.297267       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-5798/pvc-2srhl\"\nI1002 13:57:16.321752       1 
pv_controller.go:640] volume \"local-4djzr\" is released and reclaim policy \"Retain\" will be executed\nI1002 13:57:16.332146       1 pv_controller.go:879] volume \"local-4djzr\" entered phase \"Released\"\nI1002 13:57:16.372034       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-5874/sample-webhook-deployment\"\nI1002 13:57:16.372089       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-5874/sample-webhook-deployment-78988fc6cd\" objectUID=500a6c8f-a18a-4b71-9b7e-fc1500ccecdd kind=\"ReplicaSet\" virtual=false\nI1002 13:57:16.377958       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-5874/sample-webhook-deployment-78988fc6cd\" objectUID=500a6c8f-a18a-4b71-9b7e-fc1500ccecdd kind=\"ReplicaSet\" propagationPolicy=Background\nI1002 13:57:16.391820       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-5874/sample-webhook-deployment-78988fc6cd-9qnr5\" objectUID=098248e9-ff67-4506-b14b-151ff1919350 kind=\"Pod\" virtual=false\nI1002 13:57:16.398354       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-5874/sample-webhook-deployment-78988fc6cd-9qnr5\" objectUID=098248e9-ff67-4506-b14b-151ff1919350 kind=\"Pod\" propagationPolicy=Background\nE1002 13:57:16.457045       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-6097/default: secrets \"default-token-c62zv\" is forbidden: unable to create new content in namespace provisioning-6097 because it is being terminated\nI1002 13:57:16.487271       1 pv_controller_base.go:505] deletion of claim \"provisioning-5798/pvc-2srhl\" was already processed\nE1002 13:57:16.893454       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:57:16.916308       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-1114/rc-test\" need=1 creating=1\nI1002 13:57:16.921680       1 event.go:291] \"Event occurred\" object=\"replication-controller-1114/rc-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rc-test-7zv59\"\nI1002 13:57:17.190436       1 namespace_controller.go:185] Namespace has been deleted provisioning-8182\nE1002 13:57:17.247484       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:57:17.377222       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-9850/pvc-cghhz\"\nI1002 13:57:17.383733       1 pv_controller.go:640] volume \"pvc-dfcfd109-bedc-4c86-b2de-de80841818a3\" is released and reclaim policy \"Delete\" will be executed\nI1002 13:57:17.386131       1 pv_controller.go:879] volume \"pvc-dfcfd109-bedc-4c86-b2de-de80841818a3\" entered phase \"Released\"\nI1002 13:57:17.387518       1 pv_controller.go:1341] isVolumeReleased[pvc-dfcfd109-bedc-4c86-b2de-de80841818a3]: volume is released\nI1002 13:57:17.603804       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-9850/pvc-cghhz\" was already processed\nI1002 13:57:17.914512       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to 
revision 4\"\nI1002 13:57:17.930433       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-7388\nI1002 13:57:17.942943       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7806/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-6584b976d5\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1002 13:57:17.948344       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=2 creating=2\nI1002 13:57:17.949741       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6584b976d5 to 2\"\nI1002 13:57:17.955667       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-5ld8z\"\nI1002 13:57:17.955892       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7806/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1002 13:57:17.963652       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-7ff7669dd9 to 6\"\nI1002 13:57:17.963840       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" need=6 deleting=1\nI1002 13:57:17.963901       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9]\nI1002 13:57:17.964014       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-7ff7669dd9\" pod=\"deployment-7806/webserver-7ff7669dd9-6mhrb\"\nI1002 13:57:17.967658       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-79rk5\"\nI1002 13:57:17.976599       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6584b976d5 to 3\"\nI1002 13:57:17.985530       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=3 creating=1\nI1002 13:57:17.992313       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-sslzk\"\nI1002 13:57:17.992333       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-7ff7669dd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7ff7669dd9-6mhrb\"\nI1002 13:57:18.360940       1 namespace_controller.go:185] Namespace has been deleted kubectl-746\nI1002 13:57:19.189457       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-1114/rc-test\" need=2 creating=1\nI1002 13:57:19.197035       
1 event.go:291] \"Event occurred\" object=\"replication-controller-1114/rc-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rc-test-v749v\"\nI1002 13:57:19.421347       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-7ff7669dd9 to 5\"\nI1002 13:57:19.422054       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" need=5 deleting=1\nI1002 13:57:19.422222       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9]\nI1002 13:57:19.422404       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-7ff7669dd9\" pod=\"deployment-7806/webserver-7ff7669dd9-8lz5d\"\nI1002 13:57:19.432044       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=4 creating=1\nI1002 13:57:19.432617       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6584b976d5 to 4\"\nI1002 13:57:19.440546       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7806/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1002 13:57:19.441356       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-7ff7669dd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7ff7669dd9-8lz5d\"\nI1002 13:57:19.446397       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-xhxvl\"\nE1002 13:57:19.897729       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-6512-8748/default: secrets \"default-token-cc5jd\" is forbidden: unable to create new content in namespace csi-mock-volumes-6512-8748 because it is being terminated\nE1002 13:57:19.920211       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1002 13:57:20.222202       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nE1002 13:57:20.613515       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-7507/default: secrets \"default-token-ltvw8\" is forbidden: unable to create new content in namespace downward-api-7507 because it is being terminated\nE1002 13:57:20.861000       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-5874-markers/default: secrets \"default-token-hn6m7\" is forbidden: unable to create new content in namespace webhook-5874-markers because it is being terminated\nI1002 13:57:21.072367       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" 
apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-7ff7669dd9 to 4\"\nI1002 13:57:21.072781       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" need=4 deleting=1\nI1002 13:57:21.073917       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9]\nI1002 13:57:21.074323       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-7ff7669dd9\" pod=\"deployment-7806/webserver-7ff7669dd9-ltf7n\"\nI1002 13:57:21.102582       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=5 creating=1\nI1002 13:57:21.107005       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-7ff7669dd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7ff7669dd9-ltf7n\"\nI1002 13:57:21.107459       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6584b976d5 to 5\"\nI1002 13:57:21.117977       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-4wn6g\"\nE1002 13:57:21.140769       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:57:21.249079       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-7ff7669dd9 to 3\"\nI1002 13:57:21.249381       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" need=3 deleting=1\nI1002 13:57:21.249491       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9]\nI1002 13:57:21.249665       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-7ff7669dd9\" pod=\"deployment-7806/webserver-7ff7669dd9-cvpb2\"\nI1002 13:57:21.257636       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=6 creating=1\nI1002 13:57:21.258366       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6584b976d5 to 6\"\nE1002 13:57:21.274459       1 tokens_controller.go:262] error synchronizing serviceaccount server-version-6623/default: secrets \"default-token-bqngl\" is forbidden: unable to create new content in namespace server-version-6623 because it is being terminated\nI1002 13:57:21.275077       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-7ff7669dd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7ff7669dd9-cvpb2\"\nI1002 13:57:21.275099       1 event.go:291] \"Event occurred\" 
object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-9hrkh\"\nI1002 13:57:21.514520       1 namespace_controller.go:185] Namespace has been deleted provisioning-6097\nE1002 13:57:22.139387       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:57:22.227905       1 namespace_controller.go:185] Namespace has been deleted provisioning-4729\nI1002 13:57:22.356226       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" need=2 deleting=1\nI1002 13:57:22.356261       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-7ff7669dd9\" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9]\nI1002 13:57:22.356468       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-7ff7669dd9\" pod=\"deployment-7806/webserver-7ff7669dd9-wkcfl\"\nI1002 13:57:22.356728       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-7ff7669dd9 to 2\"\nI1002 13:57:22.366804       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=7 creating=1\nI1002 13:57:22.366896       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6584b976d5 to 7\"\nI1002 13:57:22.371216       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7806/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1002 13:57:22.372703       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-6584b976d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6584b976d5-bwvzd\"\nI1002 13:57:22.380285       1 event.go:291] \"Event occurred\" object=\"deployment-7806/webserver-7ff7669dd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7ff7669dd9-wkcfl\"\nE1002 13:57:22.503539       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:57:22.828961       1 event.go:291] \"Event occurred\" object=\"job-3293/all-succeed\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: all-succeed-n4mv5\"\nI1002 13:57:22.845403       1 event.go:291] \"Event occurred\" object=\"job-3293/all-succeed\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: all-succeed-bzz5w\"\nE1002 13:57:22.854777       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1002 13:57:22.998998      
 1 replica_set.go:559] \"Too few replicas\" replicaSet=\"kubectl-2691/agnhost-primary\" need=1 creating=1\nI1002 13:57:23.007221       1 event.go:291] \"Event occurred\" object=\"kubectl-2691/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-ltfn5\"\nI1002 13:57:23.232978       1 garbagecollector.go:471] \"Processing object\" object=\"replication-controller-1114/rc-test\" objectUID=075dad0c-c1b2-40b5-8b33-a91cf99c753a kind=\"ReplicationController\" virtual=false\nI1002 13:57:23.236029       1 garbagecollector.go:471] \"Processing object\" object=\"replication-controller-1114/rc-test\" objectUID=075dad0c-c1b2-40b5-8b33-a91cf99c753a kind=\"ReplicationController\" virtual=false\nE1002 13:57:23.381727       1 pv_controller.go:1452] error finding provisioning plugin for claim provisioning-4029/pvc-rthbw: storageclass.storage.k8s.io \"provisioning-4029\" not found\nI1002 13:57:23.382024       1 event.go:291] \"Event occurred\" object=\"provisioning-4029/pvc-rthbw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-4029\\\" not found\"\nE1002 13:57:23.408655       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-5798/default: secrets \"default-token-bdpz2\" is forbidden: unable to create new content in namespace provisioning-5798 because it is being terminated\nE1002 13:57:23.472484       1 pv_controller.go:1452] error finding provisioning plugin for claim volume-120/pvc-qs2tx: storageclass.storage.k8s.io \"volume-120\" not found\nI1002 13:57:23.473081       1 event.go:291] \"Event occurred\" object=\"volume-120/pvc-qs2tx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-120\\\" not found\"\nI1002 13:57:23.579514       1 pv_controller.go:879] volume \"local-2m47z\" entered phase \"Available\"\nI1002 13:57:23.665121       1 pv_controller.go:879] volume \"local-jnnxz\" entered phase \"Available\"\nI1002 13:57:23.675652       1 pv_controller.go:930] claim \"volume-120/pvc-qs2tx\" bound to volume \"local-jnnxz\"\nI1002 13:57:23.685670       1 pv_controller.go:879] volume \"local-jnnxz\" entered phase \"Bound\"\nI1002 13:57:23.687673       1 pv_controller.go:982] volume \"local-jnnxz\" bound to claim \"volume-120/pvc-qs2tx\"\nI1002 13:57:23.692052       1 pv_controller.go:823] claim \"volume-120/pvc-qs2tx\" entered phase \"Bound\"\nI1002 13:57:23.692213       1 pv_controller.go:930] claim \"provisioning-4029/pvc-rthbw\" bound to volume \"local-2m47z\"\nI1002 13:57:23.698608       1 pv_controller.go:879] volume \"local-2m47z\" entered phase \"Bound\"\nI1002 13:57:23.698748       1 pv_controller.go:982] volume \"local-2m47z\" bound to claim \"provisioning-4029/pvc-rthbw\"\nI1002 13:57:23.706275       1 pv_controller.go:823] claim \"provisioning-4029/pvc-rthbw\" entered phase \"Bound\"\nI1002 13:57:24.000469       1 namespace_controller.go:185] Namespace has been deleted ephemeral-2941-236\nI1002 13:57:24.158464       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-7806/webserver-6584b976d5\" need=4 deleting=3\nI1002 13:57:24.158583       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-7806/webserver-6584b976d5\" relatedReplicaSets=[webserver-7ff7669dd9 webserver-5cdd6dcbd7 webserver-847dcfb7fb webserver-6584b976d5]\nI1002 13:57:24.158794 
1 controller_utils.go:602] "Deleting pod" controller="webserver-6584b976d5" pod="deployment-7806/webserver-6584b976d5-9hrkh"
I1002 13:57:24.159077       1 controller_utils.go:602] "Deleting pod" controller="webserver-6584b976d5" pod="deployment-7806/webserver-6584b976d5-bwvzd"
I1002 13:57:24.159333       1 controller_utils.go:602] "Deleting pod" controller="webserver-6584b976d5" pod="deployment-7806/webserver-6584b976d5-4wn6g"
I1002 13:57:24.161528       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-6584b976d5 to 4"
I1002 13:57:24.166278       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-7806/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1002 13:57:24.178254       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-7806/webserver-5cdd6dcbd7" need=3 creating=3
I1002 13:57:24.178729       1 event.go:291] "Event occurred" object="deployment-7806/webserver-6584b976d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6584b976d5-bwvzd"
I1002 13:57:24.178753       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-5cdd6dcbd7 to 3"
E1002 13:57:24.181624       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1002 13:57:24.185772       1 event.go:291] "Event occurred" object="deployment-7806/webserver-6584b976d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6584b976d5-4wn6g"
I1002 13:57:24.189783       1 event.go:291] "Event occurred" object="deployment-7806/webserver-6584b976d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6584b976d5-9hrkh"
I1002 13:57:24.190450       1 event.go:291] "Event occurred" object="deployment-7806/webserver-5cdd6dcbd7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5cdd6dcbd7-h7d67"
E1002 13:57:24.195348       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1002 13:57:24.212734       1 event.go:291] "Event occurred" object="deployment-7806/webserver-5cdd6dcbd7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5cdd6dcbd7-d7kqz"
I1002 13:57:24.214593       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-7806/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I1002 13:57:24.216346       1 event.go:291] "Event occurred" object="deployment-7806/webserver-5cdd6dcbd7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5cdd6dcbd7-djxqt"
E1002 13:57:24.452165       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-7343/default: secrets "default-token-lqqrl" is forbidden: unable to create new content in namespace provisioning-7343 because it is being terminated
I1002 13:57:24.609362       1 namespace_controller.go:185] Namespace has been deleted volume-1499
E1002 13:57:24.798993       1 tokens_controller.go:262] error synchronizing serviceaccount gc-701/default: secrets "default-token-w6sr9" is forbidden: unable to create new content in namespace gc-701 because it is being terminated
I1002 13:57:24.867348       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-7806/webserver-6584b976d5" need=4 creating=1
I1002 13:57:24.878134       1 event.go:291] "Event occurred" object="deployment-7806/webserver-6584b976d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6584b976d5-c9t9s"
I1002 13:57:25.067200       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-7806/webserver-7ff7669dd9" need=2 creating=1
I1002 13:57:25.110361       1 event.go:291] "Event occurred" object="deployment-7806/webserver-7ff7669dd9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-7ff7669dd9-k7wf5"
I1002 13:57:25.398654       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6512-8748
E1002 13:57:25.399790       1 tokens_controller.go:262] error synchronizing serviceaccount pod-network-test-3377/default: secrets "default-token-r6lfd" is forbidden: unable to create new content in namespace pod-network-test-3377 because it is being terminated
I1002 13:57:25.944194       1 namespace_controller.go:185] Namespace has been deleted downward-api-7507
I1002 13:57:26.191267       1 namespace_controller.go:185] Namespace has been deleted webhook-5874
I1002 13:57:26.232340       1 namespace_controller.go:185] Namespace has been deleted webhook-5874-markers
E1002 13:57:26.328419       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1002 13:57:26.369387       1 namespace_controller.go:185] Namespace has been deleted server-version-6623
I1002 13:57:26.418280       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-7806/webserver-6584b976d5" need=3 deleting=1
I1002 13:57:26.418370       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-7806/webserver-6584b976d5" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9 webserver-5cdd6dcbd7]
I1002 13:57:26.418510       1 controller_utils.go:602] "Deleting pod" controller="webserver-6584b976d5" pod="deployment-7806/webserver-6584b976d5-c9t9s"
I1002 13:57:26.418848       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-6584b976d5 to 3"
I1002 13:57:26.431348       1 event.go:291] "Event occurred" object="deployment-7806/webserver-6584b976d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6584b976d5-c9t9s"
I1002 13:57:26.463279       1 namespace_controller.go:185] Namespace has been deleted provisioning-9686-8726
I1002 13:57:26.799455       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-7806/webserver-5cdd6dcbd7" need=4 creating=1
I1002 13:57:26.800300       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-5cdd6dcbd7 to 4"
I1002 13:57:26.807478       1 event.go:291] "Event occurred" object="deployment-7806/webserver-5cdd6dcbd7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5cdd6dcbd7-5flm4"
I1002 13:57:27.000700       1 deployment_controller.go:583] "Deployment has been deleted" deployment="kubectl-2895/httpd-deployment"
I1002 13:57:27.181294       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-7806/webserver-5cdd6dcbd7" need=3 deleting=1
I1002 13:57:27.181428       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-7806/webserver-5cdd6dcbd7" relatedReplicaSets=[webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9 webserver-5cdd6dcbd7]
I1002 13:57:27.181637       1 controller_utils.go:602] "Deleting pod" controller="webserver-5cdd6dcbd7" pod="deployment-7806/webserver-5cdd6dcbd7-5flm4"
I1002 13:57:27.182352       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-5cdd6dcbd7 to 3"
I1002 13:57:27.191803       1 event.go:291] "Event occurred" object="deployment-7806/webserver-5cdd6dcbd7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-5cdd6dcbd7-5flm4"
I1002 13:57:27.640870       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-7806/webserver-7ff7669dd9" need=1 deleting=1
I1002 13:57:27.641252       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-7806/webserver-7ff7669dd9" relatedReplicaSets=[webserver-5cdd6dcbd7 webserver-847dcfb7fb webserver-6584b976d5 webserver-7ff7669dd9]
I1002 13:57:27.641430       1 controller_utils.go:602] "Deleting pod" controller="webserver-7ff7669dd9" pod="deployment-7806/webserver-7ff7669dd9-k7wf5"
I1002 13:57:27.642185       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-7ff7669dd9 to 1"
I1002 13:57:27.650128       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-7806/webserver-5cdd6dcbd7" need=4 creating=1
I1002 13:57:27.650797       1 event.go:291] "Event occurred" object="deployment-7806/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-5cdd6dcbd7 to 4"
I1002 13:57:27.657642       1 event.go:291] "Event occurred" object="deployment-7806/webserver-7ff7669dd9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-7ff7669dd9-k7wf5"
I1002 13:57:27.678538       1 event.go:291] "Event occurred" object="deployment-7806/webserver-5cdd6dcbd7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5cdd6dcbd7-ztpz6"
I1002 13:57:28.455709       1 namespace_controller.go:185] Namespace has been deleted provisioning-5798
E1002 13:57:28.514999       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-ip-172-20-37-133.ap-southeast-2.compute.internal ====
==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-33-188.ap-southeast-2.compute.internal ====
I1002 13:36:33.830902       1 flags.go:59] FLAG: --add-dir-header="false"
I1002 13:36:33.833001       1 flags.go:59] FLAG: --alsologtostderr="true"
I1002 13:36:33.833165       1 flags.go:59] FLAG: --bind-address="0.0.0.0"
I1002 13:36:33.833362       1 flags.go:59] FLAG: --bind-address-hard-fail="false"
I1002 13:36:33.833456       1 flags.go:59] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
I1002 13:36:33.833539       1 flags.go:59] FLAG: --cleanup="false"
I1002 13:36:33.833615       1 flags.go:59] FLAG: --cluster-cidr="100.96.0.0/11"
I1002 13:36:33.833693       1 flags.go:59] FLAG: --config=""
I1002 13:36:33.833742       1 flags.go:59] FLAG: --config-sync-period="15m0s"
I1002 13:36:33.833816       1 flags.go:59] FLAG: --conntrack-max-per-core="131072"
I1002 13:36:33.833868       1 flags.go:59] FLAG: --conntrack-min="131072"
I1002 13:36:33.833920       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I1002 13:36:33.833953       1 flags.go:59] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I1002 13:36:33.834043       1 flags.go:59] FLAG: --detect-local-mode=""
I1002 13:36:33.834085       1 flags.go:59] FLAG: --feature-gates=""
I1002 13:36:33.834139       1 flags.go:59] FLAG: --healthz-bind-address="0.0.0.0:10256"
I1002 13:36:33.834179       1 flags.go:59] FLAG: --healthz-port="10256"
I1002 13:36:33.834555       1 flags.go:59] FLAG: --help="false"
I1002 13:36:33.834678       1 flags.go:59] FLAG: --hostname-override="ip-172-20-33-188.ap-southeast-2.compute.internal"
I1002 13:36:33.834726       1 flags.go:59] FLAG: --iptables-masquerade-bit="14"
I1002 13:36:33.834792       1 flags.go:59] FLAG: --iptables-min-sync-period="1s"
I1002 13:36:33.834848       1 flags.go:59] FLAG: --iptables-sync-period="30s"
I1002 13:36:33.834878       1 flags.go:59] FLAG: --ipvs-exclude-cidrs="[]"
I1002 13:36:33.834966       1 flags.go:59] FLAG: --ipvs-min-sync-period="0s"
I1002 13:36:33.835019       1 flags.go:59] FLAG: --ipvs-scheduler=""
I1002 13:36:33.835057       1 flags.go:59] FLAG: --ipvs-strict-arp="false"
I1002 13:36:33.835119       1 flags.go:59] FLAG: --ipvs-sync-period="30s"
I1002 13:36:33.835181       1 flags.go:59] FLAG: --ipvs-tcp-timeout="0s"
I1002 13:36:33.835496       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout="0s"
I1002 13:36:33.835562       1 flags.go:59] FLAG: --ipvs-udp-timeout="0s"
I1002 13:36:33.835600       1 flags.go:59] FLAG: --kube-api-burst="10"
I1002 13:36:33.835670       1 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I1002 13:36:33.835748       1 flags.go:59] FLAG: --kube-api-qps="5"
I1002 13:36:33.835835       1 flags.go:59] FLAG: --kubeconfig="/var/lib/kube-proxy/kubeconfig"
I1002 13:36:33.835897       1 flags.go:59] FLAG: --log-backtrace-at=":0"
I1002 13:36:33.835930       1 flags.go:59] FLAG: --log-dir=""
I1002 13:36:33.835950       1 flags.go:59] FLAG: --log-file="/var/log/kube-proxy.log"
I1002 13:36:33.836190       1 flags.go:59] FLAG: --log-file-max-size="1800"
I1002 13:36:33.840343       1 flags.go:59] FLAG: --log-flush-frequency="5s"
I1002 13:36:33.840366       1 flags.go:59] FLAG: --logtostderr="false"
I1002 13:36:33.840374       1 flags.go:59] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
I1002 13:36:33.840381       1 flags.go:59] FLAG: --masquerade-all="false"
I1002 13:36:33.840387       1 flags.go:59] FLAG: --master="https://api.internal.e2e-de872154ff-19973.test-cncf-aws.k8s.io"
I1002 13:36:33.840394       1 flags.go:59] FLAG: --metrics-bind-address="127.0.0.1:10249"
I1002 13:36:33.840401       1 flags.go:59] FLAG: --metrics-port="10249"
I1002 13:36:33.840412       1 flags.go:59] FLAG: --nodeport-addresses="[]"
I1002 13:36:33.840439       1 flags.go:59] FLAG: --one-output="false"
I1002 13:36:33.840445       1 flags.go:59] FLAG: --oom-score-adj="-998"
I1002 13:36:33.840452       1 flags.go:59] FLAG: --profiling="false"
I1002 13:36:33.840457       1 flags.go:59] FLAG: --proxy-mode=""
I1002 13:36:33.840468       1 flags.go:59] FLAG: --proxy-port-range=""
I1002 13:36:33.840476       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=""
I1002 13:36:33.840488       1 flags.go:59] FLAG: --skip-headers="false"
I1002 13:36:33.840493       1 flags.go:59] FLAG: --skip-log-headers="false"
I1002 13:36:33.840499       1 flags.go:59] FLAG: --stderrthreshold="2"
I1002 13:36:33.840505       1 flags.go:59] FLAG: --udp-timeout="250ms"
I1002 13:36:33.840510       1 flags.go:59] FLAG: --v="2"
I1002 13:36:33.840516       1 flags.go:59] FLAG: --version="false"
I1002 13:36:33.840526       1 flags.go:59] FLAG: --vmodule=""
I1002 13:36:33.840538       1 flags.go:59] FLAG: --write-config-to=""
W1002 13:36:33.840565       1 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1002 13:36:33.840713       1 feature_gate.go:243] feature gates: &{map[]}
I1002 13:36:33.840903       1 feature_gate.go:243] feature gates: &{map[]}
I1002 13:36:33.889897       1 node.go:172] Successfully retrieved node IP: 172.20.33.188
I1002 13:36:33.889932       1 server_others.go:140] Detected node IP 172.20.33.188
W1002 13:36:33.889992       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
I1002 13:36:33.890203       1 server_others.go:177] DetectLocalMode: 'ClusterCIDR'
I1002 13:36:33.940458       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I1002 13:36:33.940492       1 server_others.go:212] Using iptables Proxier.
I1002 13:36:33.940507       1 server_others.go:219] creating dualStackProxier for iptables.
W1002 13:36:33.940522       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I1002 13:36:33.940622       1 utils.go:375] Changed sysctl "net/ipv4/conf/all/route_localnet": 0 -> 1
I1002 13:36:33.940700       1 proxier.go:282] "using iptables mark for masquerade" ipFamily=IPv4 mark="0x00004000"
I1002 13:36:33.940754       1 proxier.go:330] "iptables sync params" ipFamily=IPv4 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I1002 13:36:33.940801       1 proxier.go:340] "iptables supports --random-fully" ipFamily=IPv4
I1002 13:36:33.940856       1 proxier.go:282] "using iptables mark for masquerade" ipFamily=IPv6 mark="0x00004000"
I1002 13:36:33.940895       1 proxier.go:330] "iptables sync params" ipFamily=IPv6 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I1002 13:36:33.940912       1 proxier.go:340] "iptables supports --random-fully" ipFamily=IPv6
I1002 13:36:33.941084       1 server.go:643] Version: v1.21.5
I1002 13:36:33.942072       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I1002 13:36:33.942140       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1002 13:36:33.942184       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1002 13:36:33.942699       1 config.go:315] Starting service config controller
I1002 13:36:33.942766       1 shared_informer.go:240] Waiting for caches to sync for service config
I1002 13:36:33.943749       1 config.go:224] Starting endpoint slice config controller
I1002 13:36:33.943817       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1002 13:36:33.945692       1 service.go:306] Service kube-system/kube-dns updated: 3 ports
I1002 13:36:33.945813       1 service.go:306] Service default/kubernetes updated: 1 ports
W1002 13:36:33.946011       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W1002 13:36:33.947557       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I1002 13:36:34.043000       1 shared_informer.go:247] Caches are synced for service config
I1002 13:36:34.043152       1 proxier.go:816] "Not syncing iptables until Services and Endpoints have been received from master"
I1002 13:36:34.043322       1 proxier.go:816] "Not syncing iptables until Services and Endpoints have been received from master"
I1002 13:36:34.044394       1 shared_informer.go:247] Caches are synced for endpoint slice config
I1002 13:36:34.044570       1 service.go:421] Adding new service port "kube-system/kube-dns:dns-tcp" at 100.64.0.10:53/TCP
I1002 13:36:34.044671       1 service.go:421] Adding new service port "kube-system/kube-dns:metrics" at 100.64.0.10:9153/TCP
I1002 13:36:34.044689       1 service.go:421] Adding new service port "kube-system/kube-dns:dns" at 100.64.0.10:53/UDP
I1002 13:36:34.044702       1 service.go:421] Adding new service port "default/kubernetes:https" at 100.64.0.1:443/TCP
I1002 13:36:34.044793       1 proxier.go:857] "Syncing iptables rules"
I1002 13:36:34.106574       1 proxier.go:824] "syncProxyRules complete" elapsed="62.0084ms"
I1002 13:36:34.106613       1 proxier.go:857] "Syncing iptables rules"
I1002 13:36:34.141552       1 proxier.go:824] "syncProxyRules complete" elapsed="34.940592ms"
I1002 13:37:15.846608       1 proxier.go:841] "Stale service" protocol="udp" svcPortName="kube-system/kube-dns:dns" clusterIP="100.64.0.10"
I1002 13:37:15.846788       1 proxier.go:857] "Syncing iptables rules"
I1002 13:37:15.918273       1 proxier.go:824] "syncProxyRules complete" elapsed="71.783326ms"
I1002 13:37:15.918396       1 proxier.go:857] "Syncing iptables rules"
I1002 13:37:15.967627       1 proxier.go:824] "syncProxyRules complete" elapsed="49.306646ms"
I1002 13:37:17.735416       1 proxier.go:857] "Syncing iptables rules"
I1002 13:37:17.803602       1 proxier.go:824] "syncProxyRules complete" elapsed="68.22537ms"
I1002 13:37:18.806407       1 proxier.go:857] "Syncing iptables rules"
I1002 13:37:18.872188       1 proxier.go:824] "syncProxyRules complete" elapsed="65.880498ms"
I1002 13:40:02.432688       1 service.go:306] Service services-8289/tolerate-unready updated: 1 ports
I1002 13:40:02.432779       1 service.go:421] Adding new service port "services-8289/tolerate-unready:http" at 100.70.94.78:80/TCP
I1002 13:40:02.432822       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:02.623075       1 proxier.go:824] "syncProxyRules complete" elapsed="190.317689ms"
I1002 13:40:02.623199       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:02.775830       1 proxier.go:824] "syncProxyRules complete" elapsed="152.648068ms"
I1002 13:40:03.663995       1 service.go:306] Service services-2368/nodeport-collision-1 updated: 1 ports
I1002 13:40:03.664048       1 service.go:421] Adding new service port "services-2368/nodeport-collision-1" at 100.68.222.32:80/TCP
I1002 13:40:03.664092       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:03.699918       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-2368/nodeport-collision-1\" (:32592/tcp4)"
I1002 13:40:03.706886       1 proxier.go:824] "syncProxyRules complete" elapsed="42.829265ms"
I1002 13:40:04.050015       1 service.go:306] Service services-2368/nodeport-collision-1 updated: 0 ports
I1002 13:40:04.255928       1 service.go:306] Service services-2368/nodeport-collision-2 updated: 1 ports
I1002 13:40:04.450279       1 service.go:446] Removing service port "services-2368/nodeport-collision-1"
I1002 13:40:04.450681       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:04.506620       1 proxier.go:824] "syncProxyRules complete" elapsed="56.341909ms"
I1002 13:40:05.508328       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:05.592386       1 proxier.go:824] "syncProxyRules complete" elapsed="84.130607ms"
I1002 13:40:08.857197       1 service.go:306] Service ephemeral-2177-4926/csi-hostpathplugin updated: 1 ports
I1002 13:40:08.857262       1 service.go:421] Adding new service port "ephemeral-2177-4926/csi-hostpathplugin:dummy" at 100.68.190.212:12345/TCP
I1002 13:40:08.857320       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:08.937471       1 proxier.go:824] "syncProxyRules complete" elapsed="80.130847ms"
I1002 13:40:08.937801       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:09.002444       1 proxier.go:824] "syncProxyRules complete" elapsed="64.924141ms"
I1002 13:40:09.538135       1 service.go:306] Service webhook-6380/e2e-test-webhook updated: 1 ports
I1002 13:40:10.002722       1 service.go:421] Adding new service port "webhook-6380/e2e-test-webhook" at 100.71.239.80:8443/TCP
I1002 13:40:10.002899       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:10.049863       1 proxier.go:824] "syncProxyRules complete" elapsed="47.158938ms"
I1002 13:40:11.774444       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:11.831473       1 proxier.go:824] "syncProxyRules complete" elapsed="57.072943ms"
I1002 13:40:14.296703       1 service.go:306] Service webhook-4076/e2e-test-webhook updated: 1 ports
I1002 13:40:14.296803       1 service.go:421] Adding new service port "webhook-4076/e2e-test-webhook" at 100.71.156.226:8443/TCP
I1002 13:40:14.297004       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:14.348495       1 proxier.go:824] "syncProxyRules complete" elapsed="51.734625ms"
I1002 13:40:14.348689       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:14.404061       1 proxier.go:824] "syncProxyRules complete" elapsed="55.516464ms"
I1002 13:40:15.749489       1 service.go:306] Service webhook-6380/e2e-test-webhook updated: 0 ports
I1002 13:40:15.749528       1 service.go:446] Removing service port "webhook-6380/e2e-test-webhook"
I1002 13:40:15.749578       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:15.796840       1 proxier.go:824] "syncProxyRules complete" elapsed="47.292713ms"
I1002 13:40:16.797115       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:16.841451       1 proxier.go:824] "syncProxyRules complete" elapsed="44.450069ms"
I1002 13:40:19.573516       1 service.go:306] Service webhook-4076/e2e-test-webhook updated: 0 ports
I1002 13:40:19.573574       1 service.go:446] Removing service port "webhook-4076/e2e-test-webhook"
I1002 13:40:19.573640       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:19.624268       1 proxier.go:824] "syncProxyRules complete" elapsed="50.681186ms"
I1002 13:40:19.624359       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:19.714813       1 proxier.go:824] "syncProxyRules complete" elapsed="90.472145ms"
I1002 13:40:20.719330       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:20.769681       1 proxier.go:824] "syncProxyRules complete" elapsed="50.388509ms"
I1002 13:40:24.434072       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:24.604374       1 proxier.go:824] "syncProxyRules complete" elapsed="170.39842ms"
I1002 13:40:26.792140       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:26.850369       1 proxier.go:824] "syncProxyRules complete" elapsed="58.260029ms"
I1002 13:40:27.542689       1 service.go:306] Service services-8289/tolerate-unready updated: 0 ports
I1002 13:40:27.542739       1 service.go:446] Removing service port "services-8289/tolerate-unready:http"
I1002 13:40:27.542796       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:27.658652       1 proxier.go:824] "syncProxyRules complete" elapsed="115.896465ms"
I1002 13:40:28.659415       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:28.720172       1 proxier.go:824] "syncProxyRules complete" elapsed="60.816913ms"
I1002 13:40:32.470317       1 service.go:306] Service ephemeral-1287-6716/csi-hostpathplugin updated: 1 ports
I1002 13:40:32.470364       1 service.go:421] Adding new service port "ephemeral-1287-6716/csi-hostpathplugin:dummy" at 100.68.14.90:12345/TCP
I1002 13:40:32.470408       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:32.543518       1 proxier.go:824] "syncProxyRules complete" elapsed="73.143837ms"
I1002 13:40:32.543589       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:32.604269       1 proxier.go:824] "syncProxyRules complete" elapsed="60.706448ms"
I1002 13:40:33.177490       1 service.go:306] Service services-8787/nodeport-test updated: 1 ports
I1002 13:40:33.605063       1 service.go:421] Adding new service port "services-8787/nodeport-test:http" at 100.70.161.186:80/TCP
I1002 13:40:33.605138       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:33.667325       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-8787/nodeport-test:http\" (:30291/tcp4)"
I1002 13:40:33.673504       1 proxier.go:824] "syncProxyRules complete" elapsed="68.458834ms"
I1002 13:40:35.229120       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:35.346920       1 proxier.go:824] "syncProxyRules complete" elapsed="117.842189ms"
I1002 13:40:36.477163       1 service.go:306] Service services-6530/affinity-clusterip-transition updated: 1 ports
I1002 13:40:36.477217       1 service.go:421] Adding new service port "services-6530/affinity-clusterip-transition" at 100.71.90.34:80/TCP
I1002 13:40:36.477266       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:36.569134       1 proxier.go:824] "syncProxyRules complete" elapsed="91.907686ms"
I1002 13:40:36.569206       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:36.651311       1 proxier.go:824] "syncProxyRules complete" elapsed="82.130019ms"
I1002 13:40:37.651528       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:37.721984       1 proxier.go:824] "syncProxyRules complete" elapsed="70.516592ms"
I1002 13:40:38.722283       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:38.786426       1 proxier.go:824] "syncProxyRules complete" elapsed="64.261265ms"
I1002 13:40:39.670577       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:39.729406       1 proxier.go:824] "syncProxyRules complete" elapsed="58.862272ms"
I1002 13:40:40.730472       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:40.821077       1 proxier.go:824] "syncProxyRules complete" elapsed="90.693277ms"
I1002 13:40:43.903161       1 service.go:306] Service provisioning-1384-4286/csi-hostpathplugin updated: 1 ports
I1002 13:40:43.903220       1 service.go:421] Adding new service port "provisioning-1384-4286/csi-hostpathplugin:dummy" at 100.71.225.113:12345/TCP
I1002 13:40:43.903278       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:43.956592       1 proxier.go:824] "syncProxyRules complete" elapsed="53.36789ms"
I1002 13:40:43.956666       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:44.009355       1 proxier.go:824] "syncProxyRules complete" elapsed="52.718369ms"
I1002 13:40:51.088145       1 service.go:306] Service services-6530/affinity-clusterip-transition updated: 1 ports
I1002 13:40:51.088279       1 service.go:423] Updating existing service port "services-6530/affinity-clusterip-transition" at 100.71.90.34:80/TCP
I1002 13:40:51.088409       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:51.147949       1 proxier.go:824] "syncProxyRules complete" elapsed="59.651946ms"
I1002 13:40:51.271439       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:51.348751       1 proxier.go:824] "syncProxyRules complete" elapsed="77.339213ms"
I1002 13:40:53.556543       1 service.go:306] Service services-6530/affinity-clusterip-transition updated: 1 ports
I1002 13:40:53.556591       1 service.go:423] Updating existing service port "services-6530/affinity-clusterip-transition" at 100.71.90.34:80/TCP
I1002 13:40:53.557119       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:53.615675       1 proxier.go:824] "syncProxyRules complete" elapsed="59.071652ms"
I1002 13:40:56.468199       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:56.570583       1 proxier.go:824] "syncProxyRules complete" elapsed="102.440463ms"
I1002 13:40:57.471138       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:57.544850       1 proxier.go:824] "syncProxyRules complete" elapsed="73.756374ms"
I1002 13:40:58.074332       1 service.go:306] Service conntrack-8083/svc-udp updated: 1 ports
I1002 13:40:58.074377       1 service.go:421] Adding new service port "conntrack-8083/svc-udp:udp" at 100.67.91.190:80/UDP
I1002 13:40:58.074423       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:58.182422       1 proxier.go:824] "syncProxyRules complete" elapsed="107.834822ms"
I1002 13:40:58.574828       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:58.649153       1 proxier.go:824] "syncProxyRules complete" elapsed="74.38582ms"
I1002 13:40:58.675955       1 service.go:306] Service services-8787/nodeport-test updated: 0 ports
I1002 13:40:59.649320       1 service.go:446] Removing service port "services-8787/nodeport-test:http"
I1002 13:40:59.649454       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:59.738162       1 service.go:306] Service services-3177/sourceip-test updated: 1 ports
I1002 13:40:59.738969       1 proxier.go:824] "syncProxyRules complete" elapsed="89.654311ms"
I1002 13:41:00.739152       1 service.go:421] Adding new service port "services-3177/sourceip-test" at 100.64.145.91:8080/TCP
I1002 13:41:00.739241       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:00.905010       1 proxier.go:824] "syncProxyRules complete" elapsed="165.848533ms"
I1002 13:41:02.627414       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:02.942938       1 proxier.go:824] "syncProxyRules complete" elapsed="315.569433ms"
I1002 13:41:06.229151       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:06.275337       1 proxier.go:824] "syncProxyRules complete" elapsed="46.227134ms"
I1002 13:41:06.404670       1 proxier.go:841] "Stale service" protocol="udp" svcPortName="conntrack-8083/svc-udp:udp" clusterIP="100.67.91.190"
I1002 13:41:06.404769       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:06.465461       1 proxier.go:824] "syncProxyRules complete" elapsed="60.990686ms"
I1002 13:41:10.319516       1 service.go:306] Service services-6530/affinity-clusterip-transition updated: 0 ports
I1002 13:41:10.319565       1 service.go:446] Removing service port "services-6530/affinity-clusterip-transition"
I1002 13:41:10.319621       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:10.368771       1 proxier.go:824] "syncProxyRules complete" elapsed="49.191929ms"
I1002 13:41:10.368917       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:10.432066       1 proxier.go:824] "syncProxyRules complete" elapsed="63.248538ms"
I1002 13:41:17.593639       1 service.go:306] Service webhook-5500/e2e-test-webhook updated: 1 ports
I1002 13:41:17.593927       1 service.go:421] Adding new service port "webhook-5500/e2e-test-webhook" at 100.65.252.177:8443/TCP
I1002 13:41:17.594058       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:17.667885       1 proxier.go:824] "syncProxyRules complete" elapsed="73.958098ms"
I1002 13:41:17.668108       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:17.732145       1 proxier.go:824] "syncProxyRules complete" elapsed="64.209897ms"
I1002 13:41:20.365208       1 service.go:306] Service webhook-5500/e2e-test-webhook updated: 0 ports
I1002 13:41:20.365257       1 service.go:446] Removing service port "webhook-5500/e2e-test-webhook"
I1002 13:41:20.365460       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:20.427517       1 proxier.go:824] "syncProxyRules complete" elapsed="62.236671ms"
I1002 13:41:20.427620       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:20.489478       1 proxier.go:824] "syncProxyRules complete" elapsed="61.901962ms"
I1002 13:41:21.489704       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:21.543773       1 proxier.go:824] "syncProxyRules complete" elapsed="54.17429ms"
I1002 13:41:22.496520       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:22.551101       1 proxier.go:824] "syncProxyRules complete" elapsed="54.625681ms"
I1002 13:41:22.682571       1 service.go:306] Service services-3177/sourceip-test updated: 0 ports
I1002 13:41:23.551253       1 service.go:446] Removing service port "services-3177/sourceip-test"
I1002 13:41:23.551509       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:23.611539       1 proxier.go:824] "syncProxyRules complete" elapsed="60.275038ms"
I1002 13:41:33.759276       1 service.go:306] Service provisioning-5029-9776/csi-hostpathplugin updated: 1 ports
I1002 13:41:33.759482       1 service.go:421] Adding new service port "provisioning-5029-9776/csi-hostpathplugin:dummy" at 100.71.108.225:12345/TCP
I1002 13:41:33.759601       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:33.818652       1 proxier.go:824] "syncProxyRules complete" elapsed="59.319778ms"
I1002 13:41:33.818791       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:33.873735       1 proxier.go:824] "syncProxyRules complete" elapsed="55.039626ms"
I1002 13:41:34.797461       1 service.go:306] Service ephemeral-2177-4926/csi-hostpathplugin updated: 0 ports
I1002 13:41:34.797504       1 service.go:446] Removing service port "ephemeral-2177-4926/csi-hostpathplugin:dummy"
I1002 13:41:34.797756       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:34.863480       1 proxier.go:824] "syncProxyRules complete" elapsed="65.955514ms"
I1002 13:41:35.810722       1 service.go:306] Service kubectl-2513/agnhost-primary updated: 1 ports
I1002 13:41:35.810782       1 service.go:421] Adding new service port "kubectl-2513/agnhost-primary" at 100.65.124.0:6379/TCP
I1002 13:41:35.810856       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:35.865796       1 proxier.go:824] "syncProxyRules complete" elapsed="55.003754ms"
I1002 13:41:36.866774       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:36.948766       1 proxier.go:824] "syncProxyRules complete" elapsed="82.095849ms"
I1002 13:41:38.742293       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:38.812207       1 proxier.go:824] "syncProxyRules complete" elapsed="69.994451ms"
I1002 13:41:38.822447       1 service.go:306] Service conntrack-8083/svc-udp updated: 0 ports
I1002 13:41:38.822481       1 service.go:446] Removing service port "conntrack-8083/svc-udp:udp"
I1002 13:41:38.822531       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:38.913524       1 proxier.go:824] "syncProxyRules complete" elapsed="91.028823ms"
I1002 13:41:44.634286       1 service.go:306] Service kubectl-2513/agnhost-primary updated: 0 ports
I1002 13:41:44.634330       1 service.go:446] Removing service port "kubectl-2513/agnhost-primary"
I1002 13:41:44.634381       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:44.683476       1 proxier.go:824] "syncProxyRules complete" elapsed="49.116275ms"
I1002 13:41:44.688529       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:44.761888       1 proxier.go:824] "syncProxyRules complete" elapsed="73.437646ms"
I1002 13:41:45.854070       1 service.go:306] Service provisioning-3835-3579/csi-hostpathplugin updated: 1 ports
I1002 13:41:45.854121       1 service.go:421] Adding new service port "provisioning-3835-3579/csi-hostpathplugin:dummy" at 100.68.249.246:12345/TCP
I1002 13:41:45.854168       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:45.974867       1 proxier.go:824] "syncProxyRules complete" elapsed="120.7353ms"
I1002 13:41:46.975805       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:47.171868       1 proxier.go:824] "syncProxyRules complete" elapsed="196.125738ms"
I1002 13:41:47.876682       1 service.go:306] Service services-4417/externalname-service updated: 1 ports
I1002 13:41:47.876727       1 service.go:421] Adding new service port "services-4417/externalname-service:http" at 100.68.101.53:80/TCP
I1002 13:41:47.876779       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:47.966316       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-4417/externalname-service:http\" (:31317/tcp4)"
I1002 13:41:47.978015       1 proxier.go:824] "syncProxyRules complete" elapsed="101.274165ms"
I1002 13:41:48.978481       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:49.033165       1 proxier.go:824] "syncProxyRules complete" elapsed="54.75072ms"
I1002 13:41:49.833837       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:49.889121       1 proxier.go:824] "syncProxyRules complete" elapsed="55.324175ms"
I1002 13:41:50.664392       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:50.746742       1 proxier.go:824] "syncProxyRules complete" elapsed="82.398126ms"
I1002 13:41:51.288416       1 service.go:306] Service conntrack-1597/boom-server updated: 1 ports
I1002 13:41:51.747140       1 service.go:421] Adding new service port "conntrack-1597/boom-server" at 100.68.17.91:9000/TCP
I1002 13:41:51.747250       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:51.803382       1 proxier.go:824] "syncProxyRules complete" elapsed="56.262954ms"
I1002 13:41:59.868627       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:59.924810       1 proxier.go:824] "syncProxyRules complete" elapsed="56.225987ms"
I1002 13:42:05.182985       1 service.go:306] Service services-4417/externalname-service updated: 0 ports
I1002 13:42:05.183031       1 service.go:446] Removing service port "services-4417/externalname-service:http"
I1002 13:42:05.183090       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:05.271592       1 proxier.go:824] "syncProxyRules complete" elapsed="88.542065ms"
I1002 13:42:05.271709       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:05.345237       1 proxier.go:824] "syncProxyRules complete" elapsed="73.591059ms"
I1002 13:42:08.423909       1 service.go:306] Service ephemeral-9915-93/csi-hostpathplugin updated: 1 ports
I1002 13:42:08.423972       1 service.go:421] Adding new service port "ephemeral-9915-93/csi-hostpathplugin:dummy" at 100.68.79.32:12345/TCP
I1002 13:42:08.424035       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:08.533061       1 proxier.go:824] "syncProxyRules complete" elapsed="109.072887ms"
I1002 13:42:08.533163       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:08.654092       1 proxier.go:824] "syncProxyRules complete" elapsed="120.962717ms"
I1002 13:42:10.471557       1 service.go:306] Service provisioning-1384-4286/csi-hostpathplugin updated: 0 ports
I1002 13:42:10.471598       1 service.go:446] Removing service port "provisioning-1384-4286/csi-hostpathplugin:dummy"
I1002 13:42:10.471657       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:10.598415       1 proxier.go:824] "syncProxyRules complete" elapsed="126.78654ms"
I1002 13:42:10.598511       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:10.733157       1 proxier.go:824] "syncProxyRules complete" elapsed="134.688984ms"
I1002 13:42:12.952200       1 service.go:306] Service ephemeral-5508-1475/csi-hostpathplugin updated: 1 ports
I1002 13:42:12.952263       1 service.go:421] Adding new service port "ephemeral-5508-1475/csi-hostpathplugin:dummy" at 100.64.126.131:12345/TCP
I1002 13:42:12.952504       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:13.029604       1 proxier.go:824] "syncProxyRules complete" elapsed="77.334721ms"
I1002 13:42:13.029801       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:13.113967       1 proxier.go:824] "syncProxyRules complete" elapsed="84.305894ms"
I1002 13:42:20.959958       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:21.233882       1 proxier.go:824] "syncProxyRules complete" elapsed="273.967394ms"
I1002 13:42:21.233970       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:21.366275       1 proxier.go:824] "syncProxyRules complete" elapsed="132.340998ms"
I1002 13:42:32.889962       1 service.go:306] Service provisioning-3835-3579/csi-hostpathplugin updated: 0 ports
I1002 13:42:32.889999       1 service.go:446] Removing service port "provisioning-3835-3579/csi-hostpathplugin:dummy"
I1002 13:42:32.890059       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:33.315513       1 proxier.go:824] "syncProxyRules complete" elapsed="425.494281ms"
I1002 13:42:33.315609       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:33.425319       1 proxier.go:824] "syncProxyRules complete" elapsed="109.75964ms"
I1002 13:42:33.549048       1 service.go:306] Service provisioning-5029-9776/csi-hostpathplugin updated: 0 ports
I1002 13:42:34.426173       1 service.go:446] Removing service port "provisioning-5029-9776/csi-hostpathplugin:dummy"
I1002 13:42:34.426363       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:34.633878       1 proxier.go:824] "syncProxyRules complete" elapsed="207.704677ms"
W1002 13:42:45.948458       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I1002 13:42:51.223740       1 service.go:306] Service provisioning-3514-7135/csi-hostpathplugin updated: 1 ports
I1002 13:42:51.223811       1 service.go:421] Adding new service port "provisioning-3514-7135/csi-hostpathplugin:dummy" at 100.69.85.242:12345/TCP
I1002 13:42:51.223915       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:51.298662       1 proxier.go:824] "syncProxyRules complete" elapsed="74.867955ms"
I1002 13:42:51.298775       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:51.379035       1 proxier.go:824] "syncProxyRules complete" elapsed="80.326739ms"
I1002 13:42:55.873825       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:56.139876       1 proxier.go:824] "syncProxyRules complete" elapsed="266.106575ms"
I1002 13:42:57.814462       1 service.go:306] Service webhook-2359/e2e-test-webhook updated: 1 ports
I1002 13:42:57.814588       1 service.go:421] Adding new service port "webhook-2359/e2e-test-webhook" at 100.66.7.199:8443/TCP
I1002 13:42:57.814669       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:57.877127       1 proxier.go:824] "syncProxyRules complete" elapsed="62.558181ms"
I1002 13:42:57.877304       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:57.930325       1 proxier.go:824] "syncProxyRules complete" elapsed="53.152969ms"
I1002 13:43:00.029612       1 service.go:306] Service webhook-198/e2e-test-webhook updated: 1 ports
I1002 13:43:00.029726       1 service.go:421] Adding new service port "webhook-198/e2e-test-webhook" at 100.66.78.233:8443/TCP
I1002 13:43:00.029857       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:00.106629       1 proxier.go:824] "syncProxyRules complete" elapsed="76.889884ms"
I1002 13:43:00.106793       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:00.175204       1 proxier.go:824] "syncProxyRules complete" elapsed="68.51118ms"
I1002 13:43:02.811343       1 service.go:306] Service ephemeral-1287-6716/csi-hostpathplugin updated: 0 ports
I1002 13:43:02.811390       1 service.go:446] Removing service port "ephemeral-1287-6716/csi-hostpathplugin:dummy"
I1002 13:43:02.811503       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:02.888451       1 proxier.go:824] "syncProxyRules complete" elapsed="77.047238ms"
I1002 13:43:02.888656       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:03.100992       1 proxier.go:824] "syncProxyRules complete" elapsed="212.486667ms"
I1002 13:43:03.458114       1 service.go:306] Service webhook-2359/e2e-test-webhook updated: 0 ports
I1002 13:43:04.101362       1 service.go:446] Removing service port "webhook-2359/e2e-test-webhook"
I1002 13:43:04.101473       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:04.411151       1 service.go:306] Service conntrack-1597/boom-server updated: 0 ports
I1002 13:43:04.592672       1 proxier.go:824] "syncProxyRules complete" elapsed="491.311233ms"
I1002 13:43:05.592904       1 service.go:446] Removing service port "conntrack-1597/boom-server"
I1002 13:43:05.593047       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:05.642561       1 proxier.go:824] "syncProxyRules complete" elapsed="49.663439ms"
I1002 13:43:05.720739       1 service.go:306] Service ephemeral-9915-93/csi-hostpathplugin updated: 0 ports
I1002 13:43:06.412481       1 service.go:306] Service webhook-198/e2e-test-webhook updated: 0 ports
I1002 13:43:06.412527       1 service.go:446] Removing service port "ephemeral-9915-93/csi-hostpathplugin:dummy"
I1002 13:43:06.412543       1 service.go:446] Removing service port "webhook-198/e2e-test-webhook"
I1002 13:43:06.412600       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:06.474744       1 proxier.go:824] "syncProxyRules complete" elapsed="62.176907ms"
I1002 13:43:07.475411       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:07.538639       1 proxier.go:824] "syncProxyRules complete" elapsed="63.270715ms"
I1002 13:43:31.219299       1 service.go:306] Service provisioning-3514-7135/csi-hostpathplugin updated: 0 ports
I1002 13:43:31.219342       1 service.go:446] Removing service port "provisioning-3514-7135/csi-hostpathplugin:dummy"
I1002 13:43:31.219398       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:31.276103       1 proxier.go:824] "syncProxyRules complete" elapsed="56.742437ms"
I1002 13:43:31.276193       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:31.356398       1 proxier.go:824] "syncProxyRules complete" elapsed="80.242002ms"
I1002 13:43:33.620944       1 service.go:306] Service ephemeral-5508-1475/csi-hostpathplugin updated: 0 ports
I1002 13:43:33.620981       1 service.go:446] Removing service port "ephemeral-5508-1475/csi-hostpathplugin:dummy"
I1002 13:43:33.621029       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:33.882765       1 proxier.go:824] "syncProxyRules complete" elapsed="261.765361ms"
I1002 13:43:33.882866       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:34.047023       1 proxier.go:824] "syncProxyRules complete" elapsed="164.191066ms"
I1002 13:43:48.433920       1 service.go:306] Service endpointslicemirroring-9855/example-custom-endpoints updated: 1 ports
I1002 13:43:48.433975       1 service.go:421] Adding new service port "endpointslicemirroring-9855/example-custom-endpoints:example" at 100.65.227.182:80/TCP
I1002 13:43:48.434027       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:48.491226       1 proxier.go:824] "syncProxyRules complete" elapsed="57.232034ms"
I1002 13:43:48.631767       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:48.682414       1 proxier.go:824] "syncProxyRules complete" elapsed="50.703469ms"
I1002 13:43:49.683043       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:49.728936       1 proxier.go:824] "syncProxyRules complete" elapsed="45.960472ms"
I1002 13:43:54.256223       1 service.go:306] Service webhook-1431/e2e-test-webhook updated: 1 ports
I1002 13:43:54.256271       1 service.go:421] Adding new service port "webhook-1431/e2e-test-webhook" at 100.65.253.173:8443/TCP
I1002 13:43:54.256320       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:54.327843       1 proxier.go:824] "syncProxyRules complete" elapsed="71.550269ms"
I1002 13:43:54.327949       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:54.397293       1 proxier.go:824] "syncProxyRules complete" elapsed="69.400877ms"
I1002 13:43:54.999438       1 service.go:306] Service endpointslicemirroring-9855/example-custom-endpoints updated: 0 ports
I1002 13:43:55.397647       1 service.go:446] Removing service port "endpointslicemirroring-9855/example-custom-endpoints:example"
I1002 13:43:55.397748       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:55.474568       1 proxier.go:824] "syncProxyRules complete" elapsed="76.928626ms"
I1002 13:43:58.152634       1 service.go:306] Service webhook-1431/e2e-test-webhook updated: 0 ports
I1002 13:43:58.152698       1 service.go:446] Removing service port "webhook-1431/e2e-test-webhook"
I1002 13:43:58.152932       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:58.204446       1 proxier.go:824] "syncProxyRules complete" elapsed="51.742396ms"
I1002 13:43:58.204546       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:58.255958       1 proxier.go:824] "syncProxyRules complete" elapsed="51.459511ms"
I1002 13:44:01.123855       1 service.go:306] Service endpointslice-6601/example-empty-selector updated: 1 ports
I1002 13:44:01.123905       1 service.go:421] Adding new service port "endpointslice-6601/example-empty-selector:example" at 100.67.199.12:80/TCP
I1002 13:44:01.123945       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:01.339842       1 proxier.go:824] "syncProxyRules complete" elapsed="215.919646ms"
I1002 13:44:01.339925       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:01.697216       1 service.go:306] Service endpointslice-6601/example-empty-selector updated: 0 ports
I1002 13:44:01.811181       1 proxier.go:824] "syncProxyRules complete" elapsed="471.272882ms"
I1002 13:44:02.592508       1 service.go:306] Service dns-1986/dns-test-service-3 updated: 1 ports
I1002 13:44:02.592550       1 service.go:446] Removing service port "endpointslice-6601/example-empty-selector:example"
I1002 13:44:02.592574       1 service.go:421] Adding new service port "dns-1986/dns-test-service-3:http" at 100.66.33.113:80/TCP
I1002 13:44:02.592626       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:02.677479       1 proxier.go:824] "syncProxyRules complete" elapsed="84.910178ms"
I1002 13:44:03.477428       1 service.go:306] Service volume-expand-832-7777/csi-hostpathplugin updated: 1 ports
I1002 13:44:03.477478       1 service.go:421] Adding new service port "volume-expand-832-7777/csi-hostpathplugin:dummy" at 100.69.51.228:12345/TCP
I1002 13:44:03.477541       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:03.522606       1 proxier.go:824] "syncProxyRules complete" elapsed="45.109352ms"
I1002 13:44:04.523477       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:04.736186       1 proxier.go:824] "syncProxyRules complete" elapsed="212.761592ms"
I1002 13:44:06.143318       1 service.go:306] Service webhook-6728/e2e-test-webhook updated: 1 ports
I1002 13:44:06.143386       1 service.go:421] Adding new service port "webhook-6728/e2e-test-webhook" at 100.68.43.120:8443/TCP
I1002 13:44:06.143544       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:06.208586       1 proxier.go:824] "syncProxyRules complete" elapsed="65.209703ms"
I1002 13:44:06.208686       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:06.267941       1 proxier.go:824] "syncProxyRules complete" elapsed="59.306336ms"
I1002 13:44:06.326769       1 service.go:306] Service dns-1986/dns-test-service-3 updated: 0 ports
I1002 13:44:06.328198       1 service.go:306] Service provisioning-8816-5719/csi-hostpathplugin updated: 1 ports
I1002 13:44:07.268091       1 service.go:446] Removing service port "dns-1986/dns-test-service-3:http"
I1002 13:44:07.268147       1 service.go:421] Adding new service port "provisioning-8816-5719/csi-hostpathplugin:dummy" at 100.67.197.156:12345/TCP
I1002 13:44:07.268213       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:07.325168       1 proxier.go:824] "syncProxyRules complete" elapsed="57.075123ms"
I1002 13:44:08.880782       1 service.go:306] Service webhook-6728/e2e-test-webhook updated: 0 ports
I1002 13:44:08.880821       1 service.go:446] Removing service port "webhook-6728/e2e-test-webhook"
I1002 13:44:08.880871       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:08.948717       1 proxier.go:824] "syncProxyRules complete" elapsed="67.854589ms"
I1002 13:44:09.442395       1 service.go:306] Service ephemeral-9676-9429/csi-hostpathplugin updated: 1 ports
I1002 13:44:09.442446       1 service.go:421] Adding new service port "ephemeral-9676-9429/csi-hostpathplugin:dummy" at 100.71.86.16:12345/TCP
I1002 13:44:09.442519       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:09.488266       1 proxier.go:824] "syncProxyRules complete" elapsed="45.801243ms"
I1002 13:44:10.275165       1 service.go:306] Service provisioning-9803-3343/csi-hostpathplugin updated: 1 ports
I1002 13:44:10.275293       1 service.go:421] Adding new service port "provisioning-9803-3343/csi-hostpathplugin:dummy" at 100.69.197.213:12345/TCP
I1002 13:44:10.275415       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:10.336702       1 proxier.go:824] "syncProxyRules complete" elapsed="61.405169ms"
I1002 13:44:11.338359       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:11.518505       1 proxier.go:824] "syncProxyRules complete" elapsed="180.206471ms"
I1002 13:44:18.121481       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:18.171073       1 proxier.go:824] "syncProxyRules complete" elapsed="49.642981ms"
I1002 13:44:21.902007       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:21.984447       1 proxier.go:824] "syncProxyRules complete" elapsed="82.550199ms"
I1002 13:44:30.541695       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:30.601372       1 proxier.go:824] "syncProxyRules complete" elapsed="59.676932ms"
I1002 13:44:36.558002       1 service.go:306] Service services-8399/externalname-service updated: 1 ports
I1002 13:44:36.558051       1 service.go:421] Adding new service port "services-8399/externalname-service:http" at 100.70.97.182:80/TCP
I1002 13:44:36.558108       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:36.718329       1 proxier.go:824] "syncProxyRules complete" elapsed="160.271008ms"
I1002 13:44:36.718409       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:36.897872       1 proxier.go:824] "syncProxyRules complete" elapsed="179.489785ms"
I1002 13:44:38.723807       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:38.769923       1 proxier.go:824] "syncProxyRules complete" elapsed="46.143103ms"
I1002 13:44:41.950469       1 service.go:306] Service kubectl-8252/agnhost-primary updated: 1 ports
I1002 13:44:41.950526       1 service.go:421] Adding new service port "kubectl-8252/agnhost-primary" at 100.71.107.34:6379/TCP
I1002 13:44:41.950582       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:42.024285       1 proxier.go:824] "syncProxyRules complete" elapsed="73.75342ms"
I1002 13:44:42.024371       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:42.089855       1 proxier.go:824] "syncProxyRules complete" elapsed="65.522132ms"
I1002 13:44:45.894965       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:45.951508       1 proxier.go:824] "syncProxyRules complete" elapsed="56.593515ms"
I1002 13:44:48.722180       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:48.799070       1 proxier.go:824] "syncProxyRules complete" elapsed="76.92907ms"
I1002 13:44:50.219599       1 service.go:306] Service provisioning-9803-3343/csi-hostpathplugin updated: 0 ports
I1002 13:44:50.219786       1 service.go:446] Removing service port "provisioning-9803-3343/csi-hostpathplugin:dummy"
I1002 13:44:50.219911       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:50.284435       1 proxier.go:824] "syncProxyRules complete" elapsed="64.775475ms"
I1002 13:44:50.284555       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:50.354140       1 proxier.go:824] "syncProxyRules complete" elapsed="69.643397ms"
I1002 13:44:58.417520       1 service.go:306] Service services-8399/externalname-service updated: 0 ports
I1002 13:44:58.417713       1 service.go:446] Removing service port "services-8399/externalname-service:http"
I1002 13:44:58.417824       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:58.463444       1 proxier.go:824] "syncProxyRules complete" elapsed="45.772054ms"
I1002 13:44:58.463606       1 proxier.go:857] "Syncing iptables rules"
I1002 13:44:58.510330       1 proxier.go:824] "syncProxyRules complete" elapsed="46.842428ms"
I1002 13:45:00.084621       1 service.go:306] Service services-4794/endpoint-test2 updated: 1 ports
I1002 13:45:00.084675       1 service.go:421] Adding new service port "services-4794/endpoint-test2" at 100.68.133.158:80/TCP
I1002 13:45:00.084762       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:00.184088       1 proxier.go:824] "syncProxyRules complete" elapsed="99.401282ms"
I1002 13:45:01.184904       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:01.364952       1 proxier.go:824] "syncProxyRules complete" elapsed="180.094914ms"
I1002 13:45:01.827118       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:01.977096       1 service.go:306] Service kubectl-8252/agnhost-primary updated: 0 ports
I1002 13:45:02.079014       1 proxier.go:824] "syncProxyRules complete" elapsed="251.955587ms"
I1002 13:45:03.079186       1 service.go:446] Removing service port "kubectl-8252/agnhost-primary"
I1002 13:45:03.079276       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:03.150420       1 proxier.go:824] "syncProxyRules complete" elapsed="71.238113ms"
I1002 13:45:05.336208       1 service.go:306] Service provisioning-8816-5719/csi-hostpathplugin updated: 0 ports
I1002 13:45:05.336259       1 service.go:446] Removing service port "provisioning-8816-5719/csi-hostpathplugin:dummy"
I1002 13:45:05.336319       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:05.521806       1 proxier.go:824] "syncProxyRules complete" elapsed="185.528631ms"
I1002 13:45:05.521903       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:05.609711       1 proxier.go:824] "syncProxyRules complete" elapsed="87.85202ms"
I1002 13:45:05.977207       1 service.go:306] Service webhook-6616/e2e-test-webhook updated: 1 ports
I1002 13:45:06.609935       1 service.go:421] Adding new service port "webhook-6616/e2e-test-webhook" at 100.68.125.29:8443/TCP
I1002 13:45:06.610050       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:06.655696       1 proxier.go:824] "syncProxyRules complete" elapsed="45.789044ms"
I1002 13:45:10.118530       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:10.193906       1 proxier.go:824] "syncProxyRules complete" elapsed="75.440481ms"
I1002 13:45:10.249563       1 service.go:306] Service webhook-6616/e2e-test-webhook updated: 0 ports
I1002 13:45:10.249773       1 service.go:446] Removing service port "webhook-6616/e2e-test-webhook"
I1002 13:45:10.249974       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:10.310409       1 proxier.go:824] "syncProxyRules complete" elapsed="60.650324ms"
I1002 13:45:11.311408       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:11.355692       1 proxier.go:824] "syncProxyRules complete" elapsed="44.338109ms"
I1002 13:45:12.218568       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:12.314897       1 proxier.go:824] "syncProxyRules complete" elapsed="96.363505ms"
I1002 13:45:13.229673       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:13.332750       1 proxier.go:824] "syncProxyRules complete" elapsed="103.121896ms"
I1002 13:45:13.960115       1 service.go:306] Service services-4794/endpoint-test2 updated: 0 ports
I1002 13:45:14.333304       1 service.go:446] Removing service port "services-4794/endpoint-test2"
I1002 13:45:14.333472       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:14.379862       1 proxier.go:824] "syncProxyRules complete" elapsed="46.575906ms"
I1002 13:45:17.320625       1 service.go:306] Service ephemeral-9676-9429/csi-hostpathplugin updated: 0 ports
I1002 13:45:17.320664       1 service.go:446] Removing service port "ephemeral-9676-9429/csi-hostpathplugin:dummy"
I1002 13:45:17.320717       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:17.397532       1 proxier.go:824] "syncProxyRules complete" elapsed="76.853708ms"
I1002 13:45:17.397628       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:17.448812       1 proxier.go:824] "syncProxyRules complete" elapsed="51.224382ms"
I1002 13:45:18.981979       1 service.go:306] Service svc-latency-570/latency-svc-2j2s9 updated: 1 ports
I1002 13:45:18.982118       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-2j2s9" at 100.71.79.106:80/TCP
I1002 13:45:18.982268       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:19.023557       1 proxier.go:824] "syncProxyRules complete" elapsed="41.433763ms"
I1002 13:45:19.188070       1 service.go:306] Service svc-latency-570/latency-svc-sbcv7 updated: 1 ports
I1002 13:45:19.202080       1 service.go:306] Service svc-latency-570/latency-svc-b79x6 updated: 1 ports
I1002 13:45:19.210135       1 service.go:306] Service svc-latency-570/latency-svc-8xfx8 updated: 1 ports
I1002 13:45:19.385843       1 service.go:306] Service svc-latency-570/latency-svc-gskrc updated: 1 ports
I1002 13:45:19.385966       1 service.go:306] Service svc-latency-570/latency-svc-495qq updated: 1 ports
I1002 13:45:19.386114       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-sbcv7" at 100.67.244.50:80/TCP
I1002 13:45:19.386139       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-b79x6" at 100.65.83.123:80/TCP
I1002 13:45:19.386199       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-8xfx8" at 100.71.86.183:80/TCP
I1002 13:45:19.386253       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-gskrc" at 100.65.184.227:80/TCP
I1002 13:45:19.386271       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-495qq" at 100.64.100.1:80/TCP
I1002 13:45:19.386418       1 proxier.go:857] "Syncing iptables rules"
I1002 13:45:19.396477       1 service.go:306] Service svc-latency-570/latency-svc-k4l9l updated: 1 ports
I1002 13:45:19.414928       1 service.go:306] Service svc-latency-570/latency-svc-jkxwk updated: 1 ports
I1002 13:45:19.428868       1 service.go:306] Service svc-latency-570/latency-svc-pm657 updated: 1 ports
I1002 13:45:19.442988       1 service.go:306] Service svc-latency-570/latency-svc-v6mwc updated: 1 ports
I1002 13:45:19.447622       1 proxier.go:824] "syncProxyRules complete" elapsed="61.554538ms"
I1002 13:45:19.454268       1 service.go:306] Service svc-latency-570/latency-svc-94k4z updated: 1 ports
I1002 13:45:19.468926       1 service.go:306] Service svc-latency-570/latency-svc-bmrf5 updated: 1 ports
I1002 13:45:19.496158       1 service.go:306] Service svc-latency-570/latency-svc-6knn6 updated: 1 ports
I1002 13:45:19.503713       1 service.go:306] Service svc-latency-570/latency-svc-qh9p8 updated: 1 ports
I1002 13:45:19.512990       1 service.go:306] Service svc-latency-570/latency-svc-8mjth updated: 1 ports
I1002 13:45:19.540820       1 service.go:306] Service svc-latency-570/latency-svc-hvv2n updated: 1 ports
I1002 13:45:19.552705       1 service.go:306] Service svc-latency-570/latency-svc-fc9rj updated: 1 ports
I1002 13:45:19.568146       1 service.go:306] Service svc-latency-570/latency-svc-lmvfn updated: 1 ports
I1002 13:45:19.575068       1 service.go:306] Service svc-latency-570/latency-svc-x5cxg updated: 1 ports
I1002 13:45:19.630782       1 service.go:306] Service svc-latency-570/latency-svc-zj4z2 updated: 1 ports
I1002 13:45:19.650937       1 service.go:306] Service svc-latency-570/latency-svc-th6cj updated: 1 ports
I1002 13:45:19.673573       1 service.go:306] Service svc-latency-570/latency-svc-cfdlr updated: 1 ports
I1002 13:45:19.677927       1 service.go:306] Service svc-latency-570/latency-svc-6j2th updated: 1 ports
I1002 13:45:19.690504       1 service.go:306] Service svc-latency-570/latency-svc-rhdsw updated: 1 ports
I1002 13:45:19.698604       1 service.go:306] Service svc-latency-570/latency-svc-x8wq6 updated: 1 ports
I1002 13:45:19.705199       1 service.go:306] Service svc-latency-570/latency-svc-xxcl6 updated: 1 ports
I1002 13:45:19.734629       1 service.go:306] Service svc-latency-570/latency-svc-c64hd updated: 1 ports
I1002 13:45:19.747468       1 service.go:306] Service svc-latency-570/latency-svc-klp66 updated: 1 ports
I1002 13:45:19.750049       1 service.go:306] Service svc-latency-570/latency-svc-llx24 updated: 1 ports
I1002 13:45:19.794063       1 service.go:306] Service svc-latency-570/latency-svc-nfmlx updated: 1 ports
I1002 13:45:19.804217       1 service.go:306] Service svc-latency-570/latency-svc-qnsbp updated: 1 ports
I1002 13:45:19.811977       1 service.go:306] Service svc-latency-570/latency-svc-mtxbd updated: 1 ports
I1002 13:45:19.826788       1 service.go:306] Service svc-latency-570/latency-svc-jdk2w updated: 1 ports
I1002 13:45:19.833414       1 service.go:306] Service svc-latency-570/latency-svc-pkjfx updated: 1 ports
I1002 13:45:19.846571       1 service.go:306] Service svc-latency-570/latency-svc-pnmw4 updated: 1 ports
I1002 13:45:19.858142       1 service.go:306] Service svc-latency-570/latency-svc-jrjgf updated: 1 ports
I1002 13:45:19.872366       1 service.go:306] Service svc-latency-570/latency-svc-5bkl4 updated: 1 ports
I1002 13:45:19.884572       1 service.go:306] Service svc-latency-570/latency-svc-x6xkt updated: 1 ports
I1002 13:45:19.895911       1 service.go:306] Service svc-latency-570/latency-svc-n4htl updated: 1 ports
I1002 13:45:19.920321       1 service.go:306] Service svc-latency-570/latency-svc-g7hh2 updated: 1 ports
I1002 13:45:19.940770       1 service.go:306] Service svc-latency-570/latency-svc-qsnp9 updated: 1 ports
I1002 13:45:19.970759       1 service.go:306] Service svc-latency-570/latency-svc-7k7vb updated: 1 ports
I1002 13:45:20.019220       1 service.go:306] Service svc-latency-570/latency-svc-6kdl7 updated: 1 ports
I1002 13:45:20.070593       1 service.go:306] Service svc-latency-570/latency-svc-l5k5t updated: 1 ports
I1002 13:45:20.105785       1 service.go:306] Service svc-latency-570/latency-svc-mj4xl updated: 1 ports
I1002 13:45:20.130675       1 service.go:306] Service svc-latency-570/latency-svc-4vwvp updated: 1 ports
I1002 13:45:20.155111       1 service.go:306] Service svc-latency-570/latency-svc-hxnck updated: 1 ports
I1002 13:45:20.174997       1 service.go:306] Service svc-latency-570/latency-svc-d7fvh updated: 1 ports
I1002 13:45:20.189848       1 service.go:306] Service svc-latency-570/latency-svc-mrktr updated: 1 ports
I1002 13:45:20.215215       1 service.go:306] Service svc-latency-570/latency-svc-zhgk6 updated: 1 ports
I1002 13:45:20.224326       1 service.go:306] Service svc-latency-570/latency-svc-9nmbt updated: 1 ports
I1002 13:45:20.253689       1 service.go:306] Service svc-latency-570/latency-svc-p8h46 updated: 1 ports
I1002 13:45:20.261249       1 service.go:306] Service svc-latency-570/latency-svc-lzbxd updated: 1 ports
I1002 13:45:20.268656       1 service.go:306] Service svc-latency-570/latency-svc-9hrns updated: 1 ports
I1002 13:45:20.285236       1 service.go:306] Service svc-latency-570/latency-svc-tgmwl updated: 1 ports
I1002 13:45:20.298142       1 service.go:306] Service svc-latency-570/latency-svc-kpt22 updated: 1 ports
I1002 13:45:20.306304       1 service.go:306] Service svc-latency-570/latency-svc-tt92w updated: 1 ports
I1002 13:45:20.321854       1 service.go:306] Service svc-latency-570/latency-svc-8jzh9 updated: 1 ports
I1002 13:45:20.321998       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-klp66" at 100.71.114.182:80/TCP
I1002 13:45:20.322020       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-mtxbd" at 100.68.184.26:80/TCP
I1002 13:45:20.322035       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-6kdl7" at 100.70.15.45:80/TCP
I1002 13:45:20.322055       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-jkxwk" at 100.69.217.146:80/TCP
I1002 13:45:20.322066       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-zj4z2" at 100.66.65.123:80/TCP
I1002 13:45:20.322076       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-th6cj" at 100.68.222.25:80/TCP
I1002 13:45:20.322087       1 service.go:421] Adding new service port "svc-latency-570/latency-svc-rhdsw" at 100.65.209.135:80/TCP
I1002 13:45:20.322101
  1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-c64hd\" at 100.66.55.33:80/TCP\nI1002 13:45:20.322111       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-p8h46\" at 100.66.43.94:80/TCP\nI1002 13:45:20.322122       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-pnmw4\" at 100.70.102.254:80/TCP\nI1002 13:45:20.322131       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-lmvfn\" at 100.65.97.183:80/TCP\nI1002 13:45:20.322140       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-cfdlr\" at 100.65.29.88:80/TCP\nI1002 13:45:20.322150       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-nfmlx\" at 100.69.238.156:80/TCP\nI1002 13:45:20.322160       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-qnsbp\" at 100.70.213.249:80/TCP\nI1002 13:45:20.322170       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-jdk2w\" at 100.66.124.199:80/TCP\nI1002 13:45:20.322180       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-5bkl4\" at 100.69.24.227:80/TCP\nI1002 13:45:20.322194       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-qsnp9\" at 100.66.218.69:80/TCP\nI1002 13:45:20.322205       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-7k7vb\" at 100.66.32.248:80/TCP\nI1002 13:45:20.322249       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-pm657\" at 100.71.45.255:80/TCP\nI1002 13:45:20.322260       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-x5cxg\" at 100.69.234.209:80/TCP\nI1002 13:45:20.322270       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-xxcl6\" at 100.69.71.154:80/TCP\nI1002 13:45:20.322281       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-pkjfx\" at 100.67.172.221:80/TCP\nI1002 13:45:20.322291       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-jrjgf\" at 100.70.72.170:80/TCP\nI1002 13:45:20.322301       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-mj4xl\" at 100.64.33.72:80/TCP\nI1002 13:45:20.322311       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-d7fvh\" at 100.67.114.96:80/TCP\nI1002 13:45:20.322324       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-tt92w\" at 100.68.209.171:80/TCP\nI1002 13:45:20.322335       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-6knn6\" at 100.70.118.90:80/TCP\nI1002 13:45:20.322342       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-fc9rj\" at 100.67.64.255:80/TCP\nI1002 13:45:20.322349       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-6j2th\" at 100.71.34.84:80/TCP\nI1002 13:45:20.322357       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-g7hh2\" at 100.66.16.198:80/TCP\nI1002 13:45:20.322364       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-lzbxd\" at 100.70.177.249:80/TCP\nI1002 13:45:20.322370       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-bmrf5\" at 100.65.77.38:80/TCP\nI1002 13:45:20.322377       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-hvv2n\" at 100.65.223.234:80/TCP\nI1002 13:45:20.322384       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-zhgk6\" at 
100.68.33.250:80/TCP\nI1002 13:45:20.322394       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-x6xkt\" at 100.69.180.39:80/TCP\nI1002 13:45:20.322405       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-l5k5t\" at 100.64.47.229:80/TCP\nI1002 13:45:20.322415       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-hxnck\" at 100.64.138.109:80/TCP\nI1002 13:45:20.322425       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-9nmbt\" at 100.66.113.80:80/TCP\nI1002 13:45:20.322436       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-kpt22\" at 100.66.122.120:80/TCP\nI1002 13:45:20.322443       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-9hrns\" at 100.66.143.193:80/TCP\nI1002 13:45:20.322457       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-tgmwl\" at 100.65.127.175:80/TCP\nI1002 13:45:20.322464       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-qh9p8\" at 100.69.152.110:80/TCP\nI1002 13:45:20.322479       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-8mjth\" at 100.70.188.110:80/TCP\nI1002 13:45:20.322492       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-llx24\" at 100.64.135.114:80/TCP\nI1002 13:45:20.322502       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-4vwvp\" at 100.70.106.64:80/TCP\nI1002 13:45:20.322510       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-mrktr\" at 100.67.226.162:80/TCP\nI1002 13:45:20.322516       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-8jzh9\" at 100.70.155.17:80/TCP\nI1002 13:45:20.322523       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-k4l9l\" at 100.64.211.80:80/TCP\nI1002 13:45:20.322529       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-v6mwc\" at 100.67.33.63:80/TCP\nI1002 13:45:20.322536       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-94k4z\" at 100.65.208.188:80/TCP\nI1002 13:45:20.322542       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-x8wq6\" at 100.66.21.71:80/TCP\nI1002 13:45:20.322549       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-n4htl\" at 100.66.45.32:80/TCP\nI1002 13:45:20.323102       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:20.328112       1 service.go:306] Service svc-latency-570/latency-svc-2xq7d updated: 1 ports\nI1002 13:45:20.335949       1 service.go:306] Service svc-latency-570/latency-svc-5mzm8 updated: 1 ports\nI1002 13:45:20.348101       1 service.go:306] Service svc-latency-570/latency-svc-db9d8 updated: 1 ports\nI1002 13:45:20.362079       1 service.go:306] Service svc-latency-570/latency-svc-v5wg4 updated: 1 ports\nI1002 13:45:20.382790       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"60.881486ms\"\nI1002 13:45:20.398876       1 service.go:306] Service svc-latency-570/latency-svc-gcgsr updated: 1 ports\nI1002 13:45:20.407302       1 service.go:306] Service svc-latency-570/latency-svc-j978v updated: 1 ports\nI1002 13:45:20.433991       1 service.go:306] Service svc-latency-570/latency-svc-wcvbt updated: 1 ports\nI1002 13:45:20.449535       1 service.go:306] Service svc-latency-570/latency-svc-4pwd5 updated: 1 ports\nI1002 13:45:20.493516       1 service.go:306] Service svc-latency-570/latency-svc-5g8g5 updated: 1 ports\nI1002 
13:45:20.543943       1 service.go:306] Service svc-latency-570/latency-svc-fjp5q updated: 1 ports\nI1002 13:45:20.595345       1 service.go:306] Service svc-latency-570/latency-svc-llsjt updated: 1 ports\nI1002 13:45:20.652029       1 service.go:306] Service svc-latency-570/latency-svc-lds2d updated: 1 ports\nI1002 13:45:20.702435       1 service.go:306] Service svc-latency-570/latency-svc-2gd9l updated: 1 ports\nI1002 13:45:20.740089       1 service.go:306] Service svc-latency-570/latency-svc-zm2wt updated: 1 ports\nI1002 13:45:20.785378       1 service.go:306] Service svc-latency-570/latency-svc-m2p64 updated: 1 ports\nI1002 13:45:20.850190       1 service.go:306] Service svc-latency-570/latency-svc-mv244 updated: 1 ports\nI1002 13:45:20.885224       1 service.go:306] Service svc-latency-570/latency-svc-c2gr8 updated: 1 ports\nI1002 13:45:20.939096       1 service.go:306] Service svc-latency-570/latency-svc-m4kvw updated: 1 ports\nI1002 13:45:20.991129       1 service.go:306] Service svc-latency-570/latency-svc-k8tcf updated: 1 ports\nI1002 13:45:21.042428       1 service.go:306] Service svc-latency-570/latency-svc-99pgk updated: 1 ports\nI1002 13:45:21.082586       1 service.go:306] Service svc-latency-570/latency-svc-8dtmg updated: 1 ports\nI1002 13:45:21.136407       1 service.go:306] Service svc-latency-570/latency-svc-jjl2p updated: 1 ports\nI1002 13:45:21.185043       1 service.go:306] Service svc-latency-570/latency-svc-qzxv9 updated: 1 ports\nI1002 13:45:21.233301       1 service.go:306] Service svc-latency-570/latency-svc-jjqb8 updated: 1 ports\nI1002 13:45:21.283135       1 service.go:306] Service svc-latency-570/latency-svc-xschq updated: 1 ports\nI1002 13:45:21.342462       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-j978v\" at 100.68.165.8:80/TCP\nI1002 13:45:21.342491       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-wcvbt\" at 100.64.6.44:80/TCP\nI1002 13:45:21.342504       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-fjp5q\" at 100.70.109.233:80/TCP\nI1002 13:45:21.342514       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-lds2d\" at 100.69.95.150:80/TCP\nI1002 13:45:21.342526       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-zm2wt\" at 100.68.126.194:80/TCP\nI1002 13:45:21.342537       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-m2p64\" at 100.64.80.55:80/TCP\nI1002 13:45:21.342549       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-jjl2p\" at 100.64.161.99:80/TCP\nI1002 13:45:21.342559       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-db9d8\" at 100.71.164.140:80/TCP\nI1002 13:45:21.342569       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-qzxv9\" at 100.69.241.94:80/TCP\nI1002 13:45:21.342580       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-v5wg4\" at 100.64.106.160:80/TCP\nI1002 13:45:21.342592       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-2gd9l\" at 100.64.213.213:80/TCP\nI1002 13:45:21.342607       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-mv244\" at 100.71.116.155:80/TCP\nI1002 13:45:21.342616       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-m4kvw\" at 100.67.248.52:80/TCP\nI1002 13:45:21.342625       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-k8tcf\" at 
100.70.230.146:80/TCP\nI1002 13:45:21.342635       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-jjqb8\" at 100.65.148.251:80/TCP\nI1002 13:45:21.342647       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-2xq7d\" at 100.65.185.187:80/TCP\nI1002 13:45:21.342661       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-8dtmg\" at 100.67.183.15:80/TCP\nI1002 13:45:21.342672       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-5mzm8\" at 100.71.240.8:80/TCP\nI1002 13:45:21.342687       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-4pwd5\" at 100.68.192.170:80/TCP\nI1002 13:45:21.342699       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-5g8g5\" at 100.70.96.164:80/TCP\nI1002 13:45:21.342711       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-llsjt\" at 100.65.34.149:80/TCP\nI1002 13:45:21.342721       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-c2gr8\" at 100.66.98.162:80/TCP\nI1002 13:45:21.342733       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-99pgk\" at 100.65.151.236:80/TCP\nI1002 13:45:21.342745       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-xschq\" at 100.68.234.30:80/TCP\nI1002 13:45:21.342756       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-gcgsr\" at 100.67.23.184:80/TCP\nI1002 13:45:21.343209       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:21.347863       1 service.go:306] Service svc-latency-570/latency-svc-jhhgt updated: 1 ports\nI1002 13:45:21.388746       1 service.go:306] Service svc-latency-570/latency-svc-n7gdm updated: 1 ports\nI1002 13:45:21.411770       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"69.319943ms\"\nI1002 13:45:21.483510       1 service.go:306] Service svc-latency-570/latency-svc-qrxlv updated: 1 ports\nI1002 13:45:21.533844       1 service.go:306] Service svc-latency-570/latency-svc-szmlp updated: 1 ports\nI1002 13:45:21.600451       1 service.go:306] Service svc-latency-570/latency-svc-h6sjp updated: 1 ports\nI1002 13:45:21.640023       1 service.go:306] Service svc-latency-570/latency-svc-f74g9 updated: 1 ports\nI1002 13:45:21.702049       1 service.go:306] Service svc-latency-570/latency-svc-svndg updated: 1 ports\nI1002 13:45:21.732214       1 service.go:306] Service svc-latency-570/latency-svc-r4vcx updated: 1 ports\nI1002 13:45:21.785342       1 service.go:306] Service svc-latency-570/latency-svc-7z828 updated: 1 ports\nI1002 13:45:21.806794       1 service.go:306] Service webhook-2356/e2e-test-webhook updated: 1 ports\nI1002 13:45:21.835451       1 service.go:306] Service svc-latency-570/latency-svc-wllns updated: 1 ports\nI1002 13:45:21.901549       1 service.go:306] Service svc-latency-570/latency-svc-55km4 updated: 1 ports\nI1002 13:45:21.938983       1 service.go:306] Service svc-latency-570/latency-svc-wvhqs updated: 1 ports\nI1002 13:45:21.985467       1 service.go:306] Service svc-latency-570/latency-svc-jx22p updated: 1 ports\nI1002 13:45:22.047617       1 service.go:306] Service svc-latency-570/latency-svc-4vpj5 updated: 1 ports\nI1002 13:45:22.099139       1 service.go:306] Service svc-latency-570/latency-svc-m5g8b updated: 1 ports\nI1002 13:45:22.133606       1 service.go:306] Service svc-latency-570/latency-svc-4jmn5 updated: 1 ports\nI1002 13:45:22.186334       1 service.go:306] Service svc-latency-570/latency-svc-7v854 
updated: 1 ports\nI1002 13:45:22.253815       1 service.go:306] Service svc-latency-570/latency-svc-4cj6f updated: 1 ports\nI1002 13:45:22.285900       1 service.go:306] Service svc-latency-570/latency-svc-d7kpx updated: 1 ports\nI1002 13:45:22.336572       1 service.go:306] Service svc-latency-570/latency-svc-8xsxt updated: 1 ports\nI1002 13:45:22.336624       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-d7kpx\" at 100.65.83.196:80/TCP\nI1002 13:45:22.336640       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-8xsxt\" at 100.68.38.222:80/TCP\nI1002 13:45:22.336651       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-svndg\" at 100.71.142.114:80/TCP\nI1002 13:45:22.336662       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-wllns\" at 100.69.60.109:80/TCP\nI1002 13:45:22.336671       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-wvhqs\" at 100.69.19.99:80/TCP\nI1002 13:45:22.336681       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-m5g8b\" at 100.70.226.184:80/TCP\nI1002 13:45:22.336692       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-4cj6f\" at 100.65.121.124:80/TCP\nI1002 13:45:22.336701       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-55km4\" at 100.69.31.215:80/TCP\nI1002 13:45:22.336711       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-4vpj5\" at 100.68.7.103:80/TCP\nI1002 13:45:22.336721       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-4jmn5\" at 100.66.114.40:80/TCP\nI1002 13:45:22.336731       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-jhhgt\" at 100.64.14.90:80/TCP\nI1002 13:45:22.336743       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-n7gdm\" at 100.68.161.132:80/TCP\nI1002 13:45:22.336752       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-szmlp\" at 100.68.213.171:80/TCP\nI1002 13:45:22.336762       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-r4vcx\" at 100.69.11.151:80/TCP\nI1002 13:45:22.336771       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-7z828\" at 100.71.226.77:80/TCP\nI1002 13:45:22.336781       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-qrxlv\" at 100.68.214.224:80/TCP\nI1002 13:45:22.336791       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-h6sjp\" at 100.71.234.94:80/TCP\nI1002 13:45:22.336824       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-f74g9\" at 100.66.111.244:80/TCP\nI1002 13:45:22.336837       1 service.go:421] Adding new service port \"webhook-2356/e2e-test-webhook\" at 100.69.139.34:8443/TCP\nI1002 13:45:22.336853       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-jx22p\" at 100.65.84.149:80/TCP\nI1002 13:45:22.336864       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-7v854\" at 100.64.96.23:80/TCP\nI1002 13:45:22.337338       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:22.390620       1 service.go:306] Service svc-latency-570/latency-svc-sk4pz updated: 1 ports\nI1002 13:45:22.437509       1 service.go:306] Service svc-latency-570/latency-svc-wc575 updated: 1 ports\nI1002 13:45:22.485521       1 service.go:306] Service svc-latency-570/latency-svc-5zxxg updated: 1 ports\nI1002 13:45:22.539886       1 
service.go:306] Service svc-latency-570/latency-svc-fmx56 updated: 1 ports\nI1002 13:45:22.636525       1 service.go:306] Service svc-latency-570/latency-svc-94lrf updated: 1 ports\nI1002 13:45:22.688931       1 service.go:306] Service svc-latency-570/latency-svc-llwph updated: 1 ports\nI1002 13:45:22.739755       1 service.go:306] Service svc-latency-570/latency-svc-5phfc updated: 1 ports\nI1002 13:45:22.806658       1 service.go:306] Service svc-latency-570/latency-svc-krngd updated: 1 ports\nI1002 13:45:22.828665       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"492.024256ms\"\nI1002 13:45:22.835847       1 service.go:306] Service svc-latency-570/latency-svc-cmp64 updated: 1 ports\nI1002 13:45:22.891564       1 service.go:306] Service svc-latency-570/latency-svc-l7mlq updated: 1 ports\nI1002 13:45:22.934595       1 service.go:306] Service svc-latency-570/latency-svc-8c97k updated: 1 ports\nI1002 13:45:23.001788       1 service.go:306] Service svc-latency-570/latency-svc-j8xcl updated: 1 ports\nI1002 13:45:23.071958       1 service.go:306] Service svc-latency-570/latency-svc-tgjk2 updated: 1 ports\nI1002 13:45:23.089499       1 service.go:306] Service svc-latency-570/latency-svc-b9fsh updated: 1 ports\nI1002 13:45:23.140301       1 service.go:306] Service svc-latency-570/latency-svc-75pjl updated: 1 ports\nI1002 13:45:23.212057       1 service.go:306] Service svc-latency-570/latency-svc-xcptf updated: 1 ports\nI1002 13:45:23.253340       1 service.go:306] Service svc-latency-570/latency-svc-9sdxl updated: 1 ports\nI1002 13:45:23.307911       1 service.go:306] Service svc-latency-570/latency-svc-vpmcv updated: 1 ports\nI1002 13:45:23.343086       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-sk4pz\" at 100.66.145.35:80/TCP\nI1002 13:45:23.343124       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-94lrf\" at 100.66.201.147:80/TCP\nI1002 13:45:23.343141       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-5phfc\" at 100.68.149.77:80/TCP\nI1002 13:45:23.343172       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-tgjk2\" at 100.71.138.116:80/TCP\nI1002 13:45:23.343190       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-vpmcv\" at 100.71.131.48:80/TCP\nI1002 13:45:23.343204       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-krngd\" at 100.67.184.117:80/TCP\nI1002 13:45:23.343214       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-cmp64\" at 100.64.70.151:80/TCP\nI1002 13:45:23.343240       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-9sdxl\" at 100.71.253.123:80/TCP\nI1002 13:45:23.343251       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-wc575\" at 100.71.133.69:80/TCP\nI1002 13:45:23.343261       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-fmx56\" at 100.64.254.26:80/TCP\nI1002 13:45:23.343272       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-llwph\" at 100.66.110.59:80/TCP\nI1002 13:45:23.343282       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-j8xcl\" at 100.68.173.98:80/TCP\nI1002 13:45:23.343296       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-b9fsh\" at 100.69.110.47:80/TCP\nI1002 13:45:23.343325       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-xcptf\" at 100.65.45.6:80/TCP\nI1002 13:45:23.343335       1 
service.go:421] Adding new service port \"svc-latency-570/latency-svc-5zxxg\" at 100.65.164.105:80/TCP\nI1002 13:45:23.343344       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-l7mlq\" at 100.69.173.59:80/TCP\nI1002 13:45:23.343354       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-8c97k\" at 100.67.153.223:80/TCP\nI1002 13:45:23.343365       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-75pjl\" at 100.64.191.167:80/TCP\nI1002 13:45:23.344132       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:23.347749       1 service.go:306] Service svc-latency-570/latency-svc-8t59k updated: 1 ports\nI1002 13:45:23.411276       1 service.go:306] Service svc-latency-570/latency-svc-qtn9g updated: 1 ports\nI1002 13:45:23.448174       1 service.go:306] Service svc-latency-570/latency-svc-bzlq4 updated: 1 ports\nI1002 13:45:23.470114       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"127.134176ms\"\nI1002 13:45:23.557886       1 service.go:306] Service svc-latency-570/latency-svc-mnq5s updated: 1 ports\nI1002 13:45:23.626648       1 service.go:306] Service svc-latency-570/latency-svc-kxldt updated: 1 ports\nI1002 13:45:23.648460       1 service.go:306] Service svc-latency-570/latency-svc-vft9p updated: 1 ports\nI1002 13:45:23.700347       1 service.go:306] Service svc-latency-570/latency-svc-z2fbd updated: 1 ports\nI1002 13:45:23.749011       1 service.go:306] Service svc-latency-570/latency-svc-2f6sj updated: 1 ports\nI1002 13:45:23.791654       1 service.go:306] Service svc-latency-570/latency-svc-mjhjr updated: 1 ports\nI1002 13:45:23.847280       1 service.go:306] Service svc-latency-570/latency-svc-gzppw updated: 1 ports\nI1002 13:45:23.892972       1 service.go:306] Service svc-latency-570/latency-svc-rnthr updated: 1 ports\nI1002 13:45:23.936632       1 service.go:306] Service svc-latency-570/latency-svc-gtsbh updated: 1 ports\nI1002 13:45:23.986903       1 service.go:306] Service svc-latency-570/latency-svc-c2p85 updated: 1 ports\nI1002 13:45:24.047741       1 service.go:306] Service svc-latency-570/latency-svc-wxprb updated: 1 ports\nI1002 13:45:24.090343       1 service.go:306] Service svc-latency-570/latency-svc-wjrs2 updated: 1 ports\nI1002 13:45:24.146866       1 service.go:306] Service svc-latency-570/latency-svc-484vp updated: 1 ports\nI1002 13:45:24.280902       1 service.go:306] Service svc-latency-570/latency-svc-lv5h6 updated: 1 ports\nI1002 13:45:24.319969       1 service.go:306] Service svc-latency-570/latency-svc-8bk8l updated: 1 ports\nI1002 13:45:24.347313       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-mjhjr\" at 100.67.167.231:80/TCP\nI1002 13:45:24.347344       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-gtsbh\" at 100.65.174.147:80/TCP\nI1002 13:45:24.347358       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-wxprb\" at 100.71.105.32:80/TCP\nI1002 13:45:24.347370       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-bzlq4\" at 100.64.234.26:80/TCP\nI1002 13:45:24.347379       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-kxldt\" at 100.65.34.128:80/TCP\nI1002 13:45:24.347392       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-z2fbd\" at 100.65.35.73:80/TCP\nI1002 13:45:24.347404       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-c2p85\" at 100.68.130.166:80/TCP\nI1002 13:45:24.347470       1 
service.go:421] Adding new service port \"svc-latency-570/latency-svc-wjrs2\" at 100.67.115.62:80/TCP\nI1002 13:45:24.347499       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-8bk8l\" at 100.64.17.0:80/TCP\nI1002 13:45:24.347570       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-8t59k\" at 100.65.5.44:80/TCP\nI1002 13:45:24.347636       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-vft9p\" at 100.65.59.137:80/TCP\nI1002 13:45:24.347686       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-gzppw\" at 100.66.204.33:80/TCP\nI1002 13:45:24.347703       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-484vp\" at 100.71.27.49:80/TCP\nI1002 13:45:24.347768       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-mnq5s\" at 100.65.163.215:80/TCP\nI1002 13:45:24.347785       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-2f6sj\" at 100.70.72.16:80/TCP\nI1002 13:45:24.347797       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-lv5h6\" at 100.65.197.20:80/TCP\nI1002 13:45:24.347837       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-qtn9g\" at 100.66.166.178:80/TCP\nI1002 13:45:24.347853       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-rnthr\" at 100.71.96.115:80/TCP\nI1002 13:45:24.348457       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:24.352114       1 service.go:306] Service svc-latency-570/latency-svc-f95ff updated: 1 ports\nI1002 13:45:24.362347       1 service.go:306] Service svc-latency-570/latency-svc-bkqcv updated: 1 ports\nI1002 13:45:24.428330       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"81.014503ms\"\nI1002 13:45:24.503380       1 service.go:306] Service svc-latency-570/latency-svc-9gk96 updated: 1 ports\nI1002 13:45:24.516951       1 service.go:306] Service svc-latency-570/latency-svc-66279 updated: 1 ports\nI1002 13:45:24.591609       1 service.go:306] Service svc-latency-570/latency-svc-7znp9 updated: 1 ports\nI1002 13:45:24.608830       1 service.go:306] Service svc-latency-570/latency-svc-58hlm updated: 1 ports\nI1002 13:45:24.632183       1 service.go:306] Service svc-latency-570/latency-svc-4tl26 updated: 1 ports\nI1002 13:45:24.659890       1 service.go:306] Service svc-latency-570/latency-svc-l7fk2 updated: 1 ports\nI1002 13:45:24.724529       1 service.go:306] Service svc-latency-570/latency-svc-5zmrt updated: 1 ports\nI1002 13:45:24.777501       1 service.go:306] Service svc-latency-570/latency-svc-c2jcm updated: 1 ports\nI1002 13:45:24.805500       1 service.go:306] Service svc-latency-570/latency-svc-brfsx updated: 1 ports\nI1002 13:45:24.846747       1 service.go:306] Service svc-latency-570/latency-svc-vl98x updated: 1 ports\nI1002 13:45:24.918079       1 service.go:306] Service svc-latency-570/latency-svc-2c75f updated: 1 ports\nI1002 13:45:24.938737       1 service.go:306] Service svc-latency-570/latency-svc-pr2fw updated: 1 ports\nI1002 13:45:24.994462       1 service.go:306] Service svc-latency-570/latency-svc-ln4bn updated: 1 ports\nI1002 13:45:25.034578       1 service.go:306] Service svc-latency-570/latency-svc-q6t7g updated: 1 ports\nI1002 13:45:25.087615       1 service.go:306] Service svc-latency-570/latency-svc-lsvw8 updated: 1 ports\nI1002 13:45:25.168708       1 service.go:306] Service webhook-2356/e2e-test-webhook updated: 0 ports\nI1002 13:45:25.233605       1 service.go:306] 
Service svc-latency-570/latency-svc-sbtb2 updated: 1 ports\nI1002 13:45:25.245431       1 service.go:306] Service svc-latency-570/latency-svc-7f6td updated: 1 ports\nI1002 13:45:25.293132       1 service.go:306] Service svc-latency-570/latency-svc-mkm42 updated: 1 ports\nI1002 13:45:25.341887       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-mkm42\" at 100.70.25.218:80/TCP\nI1002 13:45:25.341914       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-58hlm\" at 100.64.67.163:80/TCP\nI1002 13:45:25.341926       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-5zmrt\" at 100.65.75.226:80/TCP\nI1002 13:45:25.341935       1 service.go:446] Removing service port \"webhook-2356/e2e-test-webhook\"\nI1002 13:45:25.341946       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-l7fk2\" at 100.68.56.110:80/TCP\nI1002 13:45:25.341958       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-c2jcm\" at 100.69.192.140:80/TCP\nI1002 13:45:25.341974       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-pr2fw\" at 100.64.136.121:80/TCP\nI1002 13:45:25.341984       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-f95ff\" at 100.69.157.26:80/TCP\nI1002 13:45:25.341994       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-66279\" at 100.66.145.240:80/TCP\nI1002 13:45:25.342006       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-4tl26\" at 100.66.124.138:80/TCP\nI1002 13:45:25.342018       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-2c75f\" at 100.64.158.186:80/TCP\nI1002 13:45:25.342027       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-q6t7g\" at 100.64.155.175:80/TCP\nI1002 13:45:25.342037       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-7f6td\" at 100.70.183.163:80/TCP\nI1002 13:45:25.342047       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-bkqcv\" at 100.65.208.42:80/TCP\nI1002 13:45:25.342056       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-9gk96\" at 100.68.242.192:80/TCP\nI1002 13:45:25.342067       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-vl98x\" at 100.71.242.226:80/TCP\nI1002 13:45:25.342077       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-lsvw8\" at 100.66.208.108:80/TCP\nI1002 13:45:25.342089       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-sbtb2\" at 100.68.166.11:80/TCP\nI1002 13:45:25.342099       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-7znp9\" at 100.65.254.156:80/TCP\nI1002 13:45:25.342109       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-brfsx\" at 100.67.69.83:80/TCP\nI1002 13:45:25.342119       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-ln4bn\" at 100.71.80.143:80/TCP\nI1002 13:45:25.342745       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:25.351638       1 service.go:306] Service svc-latency-570/latency-svc-zj6ds updated: 1 ports\nI1002 13:45:25.441092       1 service.go:306] Service svc-latency-570/latency-svc-d2bhc updated: 1 ports\nI1002 13:45:25.445035       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"103.152377ms\"\nI1002 13:45:25.447643       1 service.go:306] Service svc-latency-570/latency-svc-vll92 updated: 1 ports\nI1002 
13:45:25.485523       1 service.go:306] Service svc-latency-570/latency-svc-j9dqt updated: 1 ports\nI1002 13:45:25.548163       1 service.go:306] Service svc-latency-570/latency-svc-2pdhv updated: 1 ports\nI1002 13:45:25.620831       1 service.go:306] Service svc-latency-570/latency-svc-2pq42 updated: 1 ports\nI1002 13:45:25.639807       1 service.go:306] Service svc-latency-570/latency-svc-4qcnh updated: 1 ports\nI1002 13:45:25.686169       1 service.go:306] Service svc-latency-570/latency-svc-ckpdn updated: 1 ports\nI1002 13:45:25.737631       1 service.go:306] Service svc-latency-570/latency-svc-nh5g8 updated: 1 ports\nI1002 13:45:25.805821       1 service.go:306] Service svc-latency-570/latency-svc-c2svn updated: 1 ports\nI1002 13:45:25.834167       1 service.go:306] Service svc-latency-570/latency-svc-zbk5b updated: 1 ports\nI1002 13:45:25.897432       1 service.go:306] Service svc-latency-570/latency-svc-mlnrr updated: 1 ports\nI1002 13:45:25.995938       1 service.go:306] Service svc-latency-570/latency-svc-67hk8 updated: 1 ports\nI1002 13:45:26.031625       1 service.go:306] Service svc-latency-570/latency-svc-t9wbz updated: 1 ports\nI1002 13:45:26.102982       1 service.go:306] Service svc-latency-570/latency-svc-kcplg updated: 1 ports\nI1002 13:45:26.137520       1 service.go:306] Service svc-latency-570/latency-svc-mczjv updated: 1 ports\nI1002 13:45:26.184970       1 service.go:306] Service svc-latency-570/latency-svc-clc9z updated: 1 ports\nI1002 13:45:26.234497       1 service.go:306] Service svc-latency-570/latency-svc-55qtz updated: 1 ports\nI1002 13:45:26.289150       1 service.go:306] Service svc-latency-570/latency-svc-gjpn7 updated: 1 ports\nI1002 13:45:26.339493       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-ckpdn\" at 100.69.34.247:80/TCP\nI1002 13:45:26.339519       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-kcplg\" at 100.71.69.120:80/TCP\nI1002 13:45:26.339528       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-55qtz\" at 100.68.96.117:80/TCP\nI1002 13:45:26.339536       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-d2bhc\" at 100.71.9.82:80/TCP\nI1002 13:45:26.339543       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-j9dqt\" at 100.68.2.36:80/TCP\nI1002 13:45:26.339550       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-2pdhv\" at 100.66.207.102:80/TCP\nI1002 13:45:26.339557       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-nh5g8\" at 100.71.181.75:80/TCP\nI1002 13:45:26.339563       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-c2svn\" at 100.65.54.120:80/TCP\nI1002 13:45:26.339570       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-67hk8\" at 100.70.19.77:80/TCP\nI1002 13:45:26.339576       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-clc9z\" at 100.69.40.4:80/TCP\nI1002 13:45:26.339585       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-zj6ds\" at 100.71.1.181:80/TCP\nI1002 13:45:26.339595       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-vll92\" at 100.69.102.76:80/TCP\nI1002 13:45:26.339607       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-2pq42\" at 100.65.129.190:80/TCP\nI1002 13:45:26.339614       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-zbk5b\" at 100.70.64.0:80/TCP\nI1002 
13:45:26.339621       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-mlnrr\" at 100.71.31.48:80/TCP\nI1002 13:45:26.339628       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-t9wbz\" at 100.64.71.42:80/TCP\nI1002 13:45:26.339634       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-4qcnh\" at 100.65.251.159:80/TCP\nI1002 13:45:26.339641       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-mczjv\" at 100.65.73.239:80/TCP\nI1002 13:45:26.339647       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-gjpn7\" at 100.64.57.32:80/TCP\nI1002 13:45:26.340297       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:26.344814       1 service.go:306] Service svc-latency-570/latency-svc-b7mxn updated: 1 ports\nI1002 13:45:26.386840       1 service.go:306] Service svc-latency-570/latency-svc-r4bvz updated: 1 ports\nI1002 13:45:26.416874       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"77.380818ms\"\nI1002 13:45:26.433594       1 service.go:306] Service svc-latency-570/latency-svc-g5vsq updated: 1 ports\nI1002 13:45:26.497245       1 service.go:306] Service svc-latency-570/latency-svc-9tcrg updated: 1 ports\nI1002 13:45:26.539187       1 service.go:306] Service svc-latency-570/latency-svc-jdjxc updated: 1 ports\nI1002 13:45:26.591402       1 service.go:306] Service svc-latency-570/latency-svc-76r9f updated: 1 ports\nI1002 13:45:26.640730       1 service.go:306] Service svc-latency-570/latency-svc-k77rz updated: 1 ports\nI1002 13:45:26.684337       1 service.go:306] Service svc-latency-570/latency-svc-d7gj9 updated: 1 ports\nI1002 13:45:26.735943       1 service.go:306] Service svc-latency-570/latency-svc-k5hg8 updated: 1 ports\nI1002 13:45:26.814540       1 service.go:306] Service svc-latency-570/latency-svc-pw9hd updated: 1 ports\nI1002 13:45:26.835099       1 service.go:306] Service svc-latency-570/latency-svc-7xbfk updated: 1 ports\nI1002 13:45:26.883952       1 service.go:306] Service svc-latency-570/latency-svc-4tmlt updated: 1 ports\nI1002 13:45:26.938890       1 service.go:306] Service svc-latency-570/latency-svc-wd64f updated: 1 ports\nI1002 13:45:27.007677       1 service.go:306] Service svc-latency-570/latency-svc-z4b5t updated: 1 ports\nI1002 13:45:27.033691       1 service.go:306] Service svc-latency-570/latency-svc-dt245 updated: 1 ports\nI1002 13:45:27.085286       1 service.go:306] Service svc-latency-570/latency-svc-87qj7 updated: 1 ports\nI1002 13:45:27.135313       1 service.go:306] Service svc-latency-570/latency-svc-kqqgb updated: 1 ports\nI1002 13:45:27.182078       1 service.go:306] Service svc-latency-570/latency-svc-x79xw updated: 1 ports\nI1002 13:45:27.239755       1 service.go:306] Service svc-latency-570/latency-svc-chlmk updated: 1 ports\nI1002 13:45:27.285253       1 service.go:306] Service svc-latency-570/latency-svc-9vpsf updated: 1 ports\nI1002 13:45:27.340285       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-dt245\" at 100.65.20.191:80/TCP\nI1002 13:45:27.340343       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-g5vsq\" at 100.64.18.250:80/TCP\nI1002 13:45:27.340354       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-k77rz\" at 100.67.22.29:80/TCP\nI1002 13:45:27.340363       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-k5hg8\" at 100.70.193.179:80/TCP\nI1002 13:45:27.340373       1 service.go:421] Adding new service port 
\"svc-latency-570/latency-svc-7xbfk\" at 100.67.132.41:80/TCP\nI1002 13:45:27.340382       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-4tmlt\" at 100.67.42.245:80/TCP\nI1002 13:45:27.340391       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-wd64f\" at 100.66.222.1:80/TCP\nI1002 13:45:27.340400       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-9tcrg\" at 100.64.163.124:80/TCP\nI1002 13:45:27.340409       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-jdjxc\" at 100.69.227.223:80/TCP\nI1002 13:45:27.340419       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-76r9f\" at 100.70.164.105:80/TCP\nI1002 13:45:27.340429       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-pw9hd\" at 100.69.253.52:80/TCP\nI1002 13:45:27.340438       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-87qj7\" at 100.66.48.72:80/TCP\nI1002 13:45:27.340447       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-b7mxn\" at 100.65.124.182:80/TCP\nI1002 13:45:27.340463       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-d7gj9\" at 100.66.29.160:80/TCP\nI1002 13:45:27.340473       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-z4b5t\" at 100.69.114.58:80/TCP\nI1002 13:45:27.340483       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-9vpsf\" at 100.67.55.169:80/TCP\nI1002 13:45:27.340494       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-r4bvz\" at 100.65.75.225:80/TCP\nI1002 13:45:27.340507       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-kqqgb\" at 100.67.166.7:80/TCP\nI1002 13:45:27.340529       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-x79xw\" at 100.67.49.21:80/TCP\nI1002 13:45:27.340542       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-chlmk\" at 100.68.52.162:80/TCP\nI1002 13:45:27.341251       1 service.go:306] Service svc-latency-570/latency-svc-dtgmt updated: 1 ports\nI1002 13:45:27.341377       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:27.386007       1 service.go:306] Service svc-latency-570/latency-svc-l6ww2 updated: 1 ports\nI1002 13:45:27.441555       1 service.go:306] Service svc-latency-570/latency-svc-hlmlj updated: 1 ports\nI1002 13:45:27.448653       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"108.371116ms\"\nI1002 13:45:28.315970       1 service.go:306] Service volume-expand-832-7777/csi-hostpathplugin updated: 0 ports\nI1002 13:45:28.352626       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-dtgmt\" at 100.69.90.154:80/TCP\nI1002 13:45:28.352657       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-l6ww2\" at 100.66.4.209:80/TCP\nI1002 13:45:28.352668       1 service.go:421] Adding new service port \"svc-latency-570/latency-svc-hlmlj\" at 100.69.204.139:80/TCP\nI1002 13:45:28.352678       1 service.go:446] Removing service port \"volume-expand-832-7777/csi-hostpathplugin:dummy\"\nI1002 13:45:28.353249       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:28.436506       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"83.887003ms\"\nI1002 13:45:29.437747       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:29.596265       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"159.003368ms\"\nI1002 13:45:30.554762   
    1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:30.734576       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"180.433865ms\"\nI1002 13:45:31.406821       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:31.533285       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"127.0712ms\"\nI1002 13:45:32.783329       1 service.go:306] Service pods-5225/fooservice updated: 1 ports\nI1002 13:45:32.783376       1 service.go:421] Adding new service port \"pods-5225/fooservice\" at 100.67.83.170:8765/TCP\nI1002 13:45:32.783936       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:32.855055       1 service.go:306] Service services-3680/service-headless-toggled updated: 1 ports\nI1002 13:45:32.896860       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"113.476209ms\"\nI1002 13:45:33.427937       1 service.go:421] Adding new service port \"services-3680/service-headless-toggled\" at 100.71.79.196:80/TCP\nI1002 13:45:33.428634       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:33.559060       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"131.133544ms\"\nI1002 13:45:34.332529       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:34.426499       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"94.925415ms\"\nI1002 13:45:35.323307       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:45:35.400219       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"77.865752ms\"\nI1002 13:45:35.911632       1 service.go:306] Service svc-latency-570/latency-svc-2c75f updated: 0 ports\nI1002 13:45:35.921086       1 service.go:306] Service svc-latency-570/latency-svc-2f6sj updated: 0 ports\nI1002 13:45:35.940377       1 service.go:306] Service svc-latency-570/latency-svc-2gd9l updated: 0 ports\nI1002 13:45:35.947067       1 service.go:306] Service svc-latency-570/latency-svc-2j2s9 updated: 0 ports\nI1002 13:45:35.956484       1 service.go:306] Service svc-latency-570/latency-svc-2pdhv updated: 0 ports\nI1002 13:45:35.965278       1 service.go:306] Service svc-latency-570/latency-svc-2pq42 updated: 0 ports\nI1002 13:45:35.976658       1 service.go:306] Service svc-latency-570/latency-svc-2xq7d updated: 0 ports\nI1002 13:45:35.985438       1 service.go:306] Service svc-latency-570/latency-svc-484vp updated: 0 ports\nI1002 13:45:35.998339       1 service.go:306] Service svc-latency-570/latency-svc-495qq updated: 0 ports\nI1002 13:45:36.011196       1 service.go:306] Service svc-latency-570/latency-svc-4cj6f updated: 0 ports\nI1002 13:45:36.016730       1 service.go:306] Service svc-latency-570/latency-svc-4jmn5 updated: 0 ports\nI1002 13:45:36.027463       1 service.go:306] Service svc-latency-570/latency-svc-4pwd5 updated: 0 ports\nI1002 13:45:36.041676       1 service.go:306] Service svc-latency-570/latency-svc-4qcnh updated: 0 ports\nI1002 13:45:36.057059       1 service.go:306] Service svc-latency-570/latency-svc-4tl26 updated: 0 ports\nI1002 13:45:36.070528       1 service.go:306] Service svc-latency-570/latency-svc-4tmlt updated: 0 ports\nI1002 13:45:36.087236       1 service.go:306] Service svc-latency-570/latency-svc-4vpj5 updated: 0 ports\nI1002 13:45:36.096574       1 service.go:306] Service svc-latency-570/latency-svc-4vwvp updated: 0 ports\nI1002 13:45:36.104565       1 service.go:306] Service svc-latency-570/latency-svc-55km4 updated: 0 ports\nI1002 13:45:36.121623       1 service.go:306] Service svc-latency-570/latency-svc-55qtz updated: 0 ports\nI1002 13:45:36.130360       1 service.go:306] Service 
svc-latency-570/latency-svc-58hlm updated: 0 ports\nI1002 13:45:36.141015       1 service.go:306] Service svc-latency-570/latency-svc-5bkl4 updated: 0 ports\nI1002 13:45:36.153225       1 service.go:306] Service svc-latency-570/latency-svc-5g8g5 updated: 0 ports\nI1002 13:45:36.204777       1 service.go:306] Service svc-latency-570/latency-svc-5mzm8 updated: 0 ports\nI1002 13:45:36.233864       1 service.go:306] Service svc-latency-570/latency-svc-5phfc updated: 0 ports\nI1002 13:45:36.298598       1 service.go:306] Service svc-latency-570/latency-svc-5zmrt updated: 0 ports\nI1002 13:45:36.376140       1 service.go:306] Service svc-latency-570/latency-svc-5zxxg updated: 0 ports\nI1002 13:45:36.376192       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-5bkl4\"\nI1002 13:45:36.376226       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-4qcnh\"\nI1002 13:45:36.376236       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-4tmlt\"\nI1002 13:45:36.376247       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-55km4\"\nI1002 13:45:36.376255       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-55qtz\"\nI1002 13:45:36.376262       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-2xq7d\"\nI1002 13:45:36.376270       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-4vpj5\"\nI1002 13:45:36.376291       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-5phfc\"\nI1002 13:45:36.376299       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-2c75f\"\nI1002 13:45:36.376306       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-2f6sj\"\nI1002 13:45:36.376314       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-2gd9l\"\nI1002 13:45:36.376324       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-2j2s9\"\nI1002 13:45:36.376332       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-4jmn5\"\nI1002 13:45:36.376342       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-4tl26\"\nI1002 13:45:36.376366       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-4vwvp\"\nI1002 13:45:36.376374       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-58hlm\"\nI1002 13:45:36.376381       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-2pdhv\"\nI1002 13:45:36.376388       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-2pq42\"\nI1002 13:45:36.376395       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-484vp\"\nI1002 13:45:36.376402       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-495qq\"\nI1002 13:45:36.376410       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-5zmrt\"\nI1002 13:45:36.376416       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-5zxxg\"\nI1002 13:45:36.376424       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-4cj6f\"\nI1002 13:45:36.376457       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-4pwd5\"\nI1002 13:45:36.376465       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-5g8g5\"\nI1002 13:45:36.376474       1 service.go:446] Removing service port \"svc-latency-570/latency-svc-5mzm8\"\nI1002 13:45:36.377401       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 
proxier.go:824] \"syncProxyRules complete\" elapsed=\"57.537626ms\"\nI1002 13:48:31.485688       1 service.go:306] Service webhook-8926/e2e-test-webhook updated: 0 ports\nI1002 13:48:31.485726       1 service.go:446] Removing service port \"webhook-8926/e2e-test-webhook\"\nI1002 13:48:31.485944       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:48:31.560654       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"74.912774ms\"\nI1002 13:48:31.582771       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:48:31.663104       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"80.406448ms\"\nI1002 13:48:32.736766       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:48:32.873606       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"136.982517ms\"\nW1002 13:48:51.508145       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingsxjgk\nW1002 13:48:51.698139       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ings7n4n\nW1002 13:48:51.889378       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingrcfsq\nW1002 13:48:53.028055       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingrcfsq\nW1002 13:48:53.407869       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingrcfsq\nW1002 13:48:53.598471       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingrcfsq\nW1002 13:48:54.169055       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ings7n4n\nW1002 13:48:54.173303       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingsxjgk\nI1002 13:49:05.582140       1 service.go:306] Service services-6299/nodeport-update-service updated: 1 ports\nI1002 13:49:05.582334       1 service.go:421] Adding new service port \"services-6299/nodeport-update-service\" at 100.68.120.223:80/TCP\nI1002 13:49:05.582553       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:05.697498       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"115.158474ms\"\nI1002 13:49:05.697655       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:05.929714       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"232.139309ms\"\nI1002 13:49:05.960309       1 service.go:306] Service services-6299/nodeport-update-service updated: 1 ports\nI1002 13:49:06.930268       1 service.go:421] Adding new service port \"services-6299/nodeport-update-service:tcp-port\" at 100.68.120.223:80/TCP\nI1002 13:49:06.930301       1 service.go:446] Removing service port \"services-6299/nodeport-update-service\"\nI1002 13:49:06.930580       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:07.031402       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for services-6299/nodeport-update-service:tcp-port\\\" (:31458/tcp4)\"\nI1002 13:49:07.042387       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"112.163617ms\"\nI1002 13:49:08.170739       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:08.351778       
1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"181.156347ms\"\nI1002 13:49:17.285804       1 service.go:306] Service provisioning-4135-5848/csi-hostpathplugin updated: 1 ports\nI1002 13:49:17.285854       1 service.go:421] Adding new service port \"provisioning-4135-5848/csi-hostpathplugin:dummy\" at 100.67.36.118:12345/TCP\nI1002 13:49:17.285946       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:17.361688       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"75.829976ms\"\nI1002 13:49:17.361928       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:17.438976       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"77.239768ms\"\nI1002 13:49:18.439677       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:18.538072       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"98.566458ms\"\nI1002 13:49:19.630017       1 service.go:306] Service ephemeral-1761-3004/csi-hostpathplugin updated: 1 ports\nI1002 13:49:19.630074       1 service.go:421] Adding new service port \"ephemeral-1761-3004/csi-hostpathplugin:dummy\" at 100.69.84.33:12345/TCP\nI1002 13:49:19.630205       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:19.703927       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"73.845661ms\"\nI1002 13:49:20.704749       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:20.814299       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"109.654094ms\"\nI1002 13:49:21.362568       1 service.go:306] Service volume-expand-5763-2594/csi-hostpathplugin updated: 1 ports\nI1002 13:49:21.362621       1 service.go:421] Adding new service port \"volume-expand-5763-2594/csi-hostpathplugin:dummy\" at 100.64.183.162:12345/TCP\nI1002 13:49:21.362752       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:21.433805       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"71.177458ms\"\nI1002 13:49:22.434202       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:22.501933       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.869058ms\"\nI1002 13:49:25.537126       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:25.607038       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"70.029849ms\"\nI1002 13:49:27.060864       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:27.323794       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"263.057199ms\"\nI1002 13:49:30.908465       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:31.281711       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"373.383045ms\"\nI1002 13:49:40.977225       1 service.go:306] Service services-6299/nodeport-update-service updated: 2 ports\nI1002 13:49:40.977274       1 service.go:423] Updating existing service port \"services-6299/nodeport-update-service:tcp-port\" at 100.68.120.223:80/TCP\nI1002 13:49:40.977293       1 service.go:421] Adding new service port \"services-6299/nodeport-update-service:udp-port\" at 100.68.120.223:80/UDP\nI1002 13:49:40.977425       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:41.050110       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for services-6299/nodeport-update-service:udp-port\\\" (:30446/udp4)\"\nI1002 13:49:41.068243       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"90.959376ms\"\nI1002 13:49:41.068489       1 proxier.go:841] \"Stale service\" protocol=\"udp\" svcPortName=\"services-6299/nodeport-update-service:udp-port\" clusterIP=\"100.68.120.223\"\nI1002 13:49:41.068555       1 
proxier.go:851] Stale udp service NodePort services-6299/nodeport-update-service:udp-port -> 30446\nI1002 13:49:41.068577       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:41.185129       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"116.83878ms\"\nI1002 13:49:43.791977       1 service.go:306] Service services-2177/affinity-clusterip updated: 1 ports\nI1002 13:49:43.792031       1 service.go:421] Adding new service port \"services-2177/affinity-clusterip\" at 100.66.18.203:80/TCP\nI1002 13:49:43.792142       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:43.891332       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"99.288875ms\"\nI1002 13:49:43.891493       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:43.975266       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"83.874605ms\"\nI1002 13:49:45.548304       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:45.954847       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"406.670236ms\"\nI1002 13:49:45.954998       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:46.046994       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"92.094768ms\"\nI1002 13:49:50.326733       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:50.634996       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"308.375673ms\"\nI1002 13:49:55.450345       1 service.go:306] Service ephemeral-8517-6530/csi-hostpathplugin updated: 1 ports\nI1002 13:49:55.450395       1 service.go:421] Adding new service port \"ephemeral-8517-6530/csi-hostpathplugin:dummy\" at 100.66.124.31:12345/TCP\nI1002 13:49:55.450538       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:55.560314       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"109.906963ms\"\nI1002 13:49:55.560491       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:55.657981       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"97.609462ms\"\nI1002 13:49:59.098724       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:59.236707       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"138.139335ms\"\nI1002 13:49:59.236893       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:49:59.344859       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"108.097116ms\"\nI1002 13:49:59.362810       1 service.go:306] Service provisioning-4135-5848/csi-hostpathplugin updated: 0 ports\nI1002 13:50:00.345054       1 service.go:446] Removing service port \"provisioning-4135-5848/csi-hostpathplugin:dummy\"\nI1002 13:50:00.345252       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:00.419845       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"74.804987ms\"\nI1002 13:50:06.111523       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:06.231881       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"120.511163ms\"\nI1002 13:50:06.232056       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:06.320432       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"88.50429ms\"\nI1002 13:50:07.118473       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:07.189110       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"70.771463ms\"\nI1002 13:50:10.370671       1 service.go:306] Service services-6124/affinity-clusterip-timeout updated: 0 ports\nI1002 13:50:10.370737       1 service.go:446] Removing service port \"services-6124/affinity-clusterip-timeout\"\nI1002 13:50:10.370947       1 proxier.go:857] \"Syncing iptables 
rules\"\nI1002 13:50:10.561291       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"190.562828ms\"\nI1002 13:50:10.561775       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:10.641620       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"80.280689ms\"\nI1002 13:50:14.671847       1 service.go:306] Service volume-966-8579/csi-hostpathplugin updated: 0 ports\nI1002 13:50:14.671894       1 service.go:446] Removing service port \"volume-966-8579/csi-hostpathplugin:dummy\"\nI1002 13:50:14.672019       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:14.780667       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"108.757912ms\"\nI1002 13:50:14.780883       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:14.855014       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"74.295123ms\"\nI1002 13:50:22.581686       1 service.go:306] Service services-2177/affinity-clusterip updated: 0 ports\nI1002 13:50:22.581732       1 service.go:446] Removing service port \"services-2177/affinity-clusterip\"\nI1002 13:50:22.581962       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:22.628873       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.114288ms\"\nI1002 13:50:22.629112       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:22.682800       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.877066ms\"\nI1002 13:50:25.590551       1 service.go:306] Service volume-expand-8907-1786/csi-hostpathplugin updated: 1 ports\nI1002 13:50:25.590621       1 service.go:421] Adding new service port \"volume-expand-8907-1786/csi-hostpathplugin:dummy\" at 100.71.51.114:12345/TCP\nI1002 13:50:25.590987       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:25.669242       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"78.628025ms\"\nI1002 13:50:25.669400       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:25.752939       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"83.644883ms\"\nI1002 13:50:27.795785       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:27.890516       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"94.817582ms\"\nI1002 13:50:34.275276       1 service.go:306] Service volume-expand-5763-2594/csi-hostpathplugin updated: 0 ports\nI1002 13:50:34.275320       1 service.go:446] Removing service port \"volume-expand-5763-2594/csi-hostpathplugin:dummy\"\nI1002 13:50:34.275491       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:34.338719       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"63.383524ms\"\nI1002 13:50:34.338998       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:34.396724       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"57.944302ms\"\nI1002 13:50:38.491998       1 service.go:306] Service services-6299/nodeport-update-service updated: 0 ports\nI1002 13:50:38.492038       1 service.go:446] Removing service port \"services-6299/nodeport-update-service:tcp-port\"\nI1002 13:50:38.492054       1 service.go:446] Removing service port \"services-6299/nodeport-update-service:udp-port\"\nI1002 13:50:38.492179       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:38.644962       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"152.907582ms\"\nI1002 13:50:38.645153       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:38.713703       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"68.694593ms\"\nI1002 13:50:47.143738       1 service.go:306] Service 
services-7834/multi-endpoint-test updated: 2 ports\nI1002 13:50:47.143819       1 service.go:421] Adding new service port \"services-7834/multi-endpoint-test:portname1\" at 100.69.196.101:80/TCP\nI1002 13:50:47.143836       1 service.go:421] Adding new service port \"services-7834/multi-endpoint-test:portname2\" at 100.69.196.101:81/TCP\nI1002 13:50:47.144019       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:47.210852       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.03162ms\"\nI1002 13:50:47.211102       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:47.262029       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.127555ms\"\nI1002 13:50:47.886480       1 service.go:306] Service services-4405/externalip-test updated: 1 ports\nI1002 13:50:48.263134       1 service.go:421] Adding new service port \"services-4405/externalip-test:http\" at 100.70.248.198:80/TCP\nI1002 13:50:48.263308       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:48.361978       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"98.872716ms\"\nI1002 13:50:49.362543       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:49.418971       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"56.546655ms\"\nI1002 13:50:51.453738       1 service.go:306] Service ephemeral-8517-6530/csi-hostpathplugin updated: 0 ports\nI1002 13:50:51.453782       1 service.go:446] Removing service port \"ephemeral-8517-6530/csi-hostpathplugin:dummy\"\nI1002 13:50:51.453907       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:51.620820       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"167.025176ms\"\nI1002 13:50:51.620983       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:51.785513       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"164.642766ms\"\nI1002 13:50:53.086718       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:53.173564       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"87.001436ms\"\nI1002 13:50:57.036820       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:57.140579       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"103.888798ms\"\nI1002 13:50:57.284284       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:57.369763       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"85.620943ms\"\nI1002 13:50:59.155207       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:50:59.335124       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"180.06196ms\"\nI1002 13:51:00.112628       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:00.377204       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"264.677559ms\"\nI1002 13:51:00.866276       1 service.go:306] Service services-7834/multi-endpoint-test updated: 0 ports\nI1002 13:51:00.866330       1 service.go:446] Removing service port \"services-7834/multi-endpoint-test:portname1\"\nI1002 13:51:00.866347       1 service.go:446] Removing service port \"services-7834/multi-endpoint-test:portname2\"\nI1002 13:51:00.866607       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:00.916569       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.224149ms\"\nI1002 13:51:01.916936       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:02.024448       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"107.589173ms\"\nI1002 13:51:11.561486       1 service.go:306] Service aggregator-2992/sample-api updated: 1 ports\nI1002 13:51:11.561533       1 service.go:421] Adding new 
service port \"aggregator-2992/sample-api\" at 100.66.151.192:7443/TCP\nI1002 13:51:11.561656       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:11.614463       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.924185ms\"\nI1002 13:51:11.614658       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:11.682197       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.679853ms\"\nI1002 13:51:22.907355       1 service.go:306] Service services-4405/externalip-test updated: 0 ports\nI1002 13:51:22.907562       1 service.go:446] Removing service port \"services-4405/externalip-test:http\"\nI1002 13:51:22.908482       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:22.966827       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.412326ms\"\nI1002 13:51:22.967128       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:23.027419       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"60.539467ms\"\nI1002 13:51:25.121379       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:25.283154       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"161.860882ms\"\nW1002 13:51:25.948538       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nI1002 13:51:33.045528       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:33.138864       1 service.go:306] Service webhook-8934/e2e-test-webhook updated: 1 ports\nI1002 13:51:33.192369       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"146.92488ms\"\nI1002 13:51:33.192415       1 service.go:421] Adding new service port \"webhook-8934/e2e-test-webhook\" at 100.69.92.67:8443/TCP\nI1002 13:51:33.192700       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:33.376175       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"183.758944ms\"\nI1002 13:51:33.400127       1 service.go:306] Service aggregator-2992/sample-api updated: 0 ports\nI1002 13:51:34.377732       1 service.go:446] Removing service port \"aggregator-2992/sample-api\"\nI1002 13:51:34.377901       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:34.443603       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"65.880907ms\"\nI1002 13:51:34.759481       1 service.go:306] Service webhook-694/e2e-test-webhook updated: 1 ports\nI1002 13:51:35.444171       1 service.go:421] Adding new service port \"webhook-694/e2e-test-webhook\" at 100.70.253.102:8443/TCP\nI1002 13:51:35.444319       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:35.513553       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"69.393894ms\"\nI1002 13:51:39.653092       1 service.go:306] Service webhook-3798/e2e-test-webhook updated: 1 ports\nI1002 13:51:39.653192       1 service.go:421] Adding new service port \"webhook-3798/e2e-test-webhook\" at 100.65.169.129:8443/TCP\nI1002 13:51:39.653334       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:39.716716       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"63.518908ms\"\nI1002 13:51:39.716874       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:39.769977       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.214907ms\"\nI1002 13:51:41.183452       1 service.go:306] Service webhook-694/e2e-test-webhook updated: 0 ports\nI1002 13:51:41.183499       1 service.go:446] Removing service port \"webhook-694/e2e-test-webhook\"\nI1002 13:51:41.183633       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:41.238813 
      1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"55.301422ms\"\nI1002 13:51:42.240102       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:42.336547       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"96.596155ms\"\nI1002 13:51:44.849040       1 service.go:306] Service webhook-3798/e2e-test-webhook updated: 0 ports\nI1002 13:51:44.849084       1 service.go:446] Removing service port \"webhook-3798/e2e-test-webhook\"\nI1002 13:51:44.849212       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:45.020464       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"171.364734ms\"\nI1002 13:51:45.020606       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:45.085412       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"64.894978ms\"\nI1002 13:51:47.869585       1 service.go:306] Service webhook-8934/e2e-test-webhook updated: 0 ports\nI1002 13:51:47.869622       1 service.go:446] Removing service port \"webhook-8934/e2e-test-webhook\"\nI1002 13:51:47.869759       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:47.931107       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"61.467606ms\"\nI1002 13:51:47.931279       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:47.990115       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"58.95853ms\"\nI1002 13:51:54.580695       1 service.go:306] Service volumemode-9655-7213/csi-hostpathplugin updated: 1 ports\nI1002 13:51:54.580754       1 service.go:421] Adding new service port \"volumemode-9655-7213/csi-hostpathplugin:dummy\" at 100.71.46.134:12345/TCP\nI1002 13:51:54.580905       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:54.663417       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"82.658601ms\"\nI1002 13:51:54.663572       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:54.773083       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"109.620228ms\"\nI1002 13:51:56.377416       1 service.go:306] Service ephemeral-1761-3004/csi-hostpathplugin updated: 0 ports\nI1002 13:51:56.377459       1 service.go:446] Removing service port \"ephemeral-1761-3004/csi-hostpathplugin:dummy\"\nI1002 13:51:56.377605       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:56.482601       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"105.124464ms\"\nI1002 13:51:57.483510       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:57.596289       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"112.917637ms\"\nI1002 13:51:59.333989       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:51:59.382651       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.781436ms\"\nI1002 13:52:09.148581       1 service.go:306] Service crd-webhook-1521/e2e-test-crd-conversion-webhook updated: 1 ports\nI1002 13:52:09.148632       1 service.go:421] Adding new service port \"crd-webhook-1521/e2e-test-crd-conversion-webhook\" at 100.65.62.242:9443/TCP\nI1002 13:52:09.148759       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:09.341100       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"192.44893ms\"\nI1002 13:52:09.341371       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:09.410918       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"69.763662ms\"\nI1002 13:52:15.744947       1 service.go:306] Service crd-webhook-1521/e2e-test-crd-conversion-webhook updated: 0 ports\nI1002 13:52:15.744998       1 service.go:446] Removing service port 
\"crd-webhook-1521/e2e-test-crd-conversion-webhook\"\nI1002 13:52:15.745119       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:15.827620       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"82.53755ms\"\nI1002 13:52:15.840007       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:15.917144       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"77.214942ms\"\nI1002 13:52:18.167224       1 service.go:306] Service proxy-9917/proxy-service-9l7xw updated: 4 ports\nI1002 13:52:18.167436       1 service.go:421] Adding new service port \"proxy-9917/proxy-service-9l7xw:portname1\" at 100.67.107.168:80/TCP\nI1002 13:52:18.167462       1 service.go:421] Adding new service port \"proxy-9917/proxy-service-9l7xw:portname2\" at 100.67.107.168:81/TCP\nI1002 13:52:18.167474       1 service.go:421] Adding new service port \"proxy-9917/proxy-service-9l7xw:tlsportname1\" at 100.67.107.168:443/TCP\nI1002 13:52:18.167486       1 service.go:421] Adding new service port \"proxy-9917/proxy-service-9l7xw:tlsportname2\" at 100.67.107.168:444/TCP\nI1002 13:52:18.167670       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:18.217974       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.695846ms\"\nI1002 13:52:18.218333       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:18.271867       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.839572ms\"\nI1002 13:52:21.873913       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:21.957130       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"83.464475ms\"\nI1002 13:52:22.675039       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:22.705512       1 service.go:306] Service volume-expand-8907-1786/csi-hostpathplugin updated: 0 ports\nI1002 13:52:22.836344       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"161.462332ms\"\nI1002 13:52:23.837296       1 service.go:446] Removing service port \"volume-expand-8907-1786/csi-hostpathplugin:dummy\"\nI1002 13:52:23.837598       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:23.901622       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"64.326641ms\"\nI1002 13:52:28.656499       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:28.784663       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"128.287021ms\"\nI1002 13:52:37.349019       1 service.go:306] Service volumemode-9655-7213/csi-hostpathplugin updated: 0 ports\nI1002 13:52:37.349063       1 service.go:446] Removing service port \"volumemode-9655-7213/csi-hostpathplugin:dummy\"\nI1002 13:52:37.349192       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:37.408852       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.775242ms\"\nI1002 13:52:37.409244       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:37.469117       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"60.215605ms\"\nI1002 13:52:44.433234       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:44.493020       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.866089ms\"\nI1002 13:52:44.590862       1 service.go:306] Service proxy-9917/proxy-service-9l7xw updated: 0 ports\nI1002 13:52:44.590906       1 service.go:446] Removing service port \"proxy-9917/proxy-service-9l7xw:portname1\"\nI1002 13:52:44.590950       1 service.go:446] Removing service port \"proxy-9917/proxy-service-9l7xw:portname2\"\nI1002 13:52:44.590959       1 service.go:446] Removing service port \"proxy-9917/proxy-service-9l7xw:tlsportname1\"\nI1002 
13:52:44.590966       1 service.go:446] Removing service port \"proxy-9917/proxy-service-9l7xw:tlsportname2\"\nI1002 13:52:44.591141       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:44.648313       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"57.393989ms\"\nI1002 13:52:46.017763       1 service.go:306] Service dns-7654/test-service-2 updated: 1 ports\nI1002 13:52:46.017814       1 service.go:421] Adding new service port \"dns-7654/test-service-2:http\" at 100.66.133.86:80/TCP\nI1002 13:52:46.017982       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:46.072454       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"54.617293ms\"\nI1002 13:52:47.072969       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:47.163980       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"91.138387ms\"\nI1002 13:52:55.008497       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:52:55.139191       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"130.772853ms\"\nI1002 13:53:07.340409       1 service.go:306] Service services-6734/service-proxy-toggled updated: 1 ports\nI1002 13:53:07.340486       1 service.go:421] Adding new service port \"services-6734/service-proxy-toggled\" at 100.71.103.83:80/TCP\nI1002 13:53:07.340614       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:53:07.519249       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"178.770255ms\"\nI1002 13:53:07.519394       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:53:07.607201       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"87.899174ms\"\nI1002 13:53:08.843194       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:53:08.948474       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"105.42366ms\"\nI1002 13:53:09.948792       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:53:10.104816       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"156.159709ms\"\nI1002 13:53:11.948899       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:53:12.029847       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"81.076859ms\"\nI1002 13:53:16.324551       1 service.go:306] Service provisioning-3680-5610/csi-hostpathplugin updated: 1 ports\nI1002 13:53:16.324951       1 service.go:421] Adding new service port \"provisioning-3680-5610/csi-hostpathplugin:dummy\" at 100.67.78.230:12345/TCP\nI1002 13:53:16.325136       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:53:16.386976       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"62.02818ms\"\nI1002 13:53:16.387145       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:53:16.443171       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"56.146138ms\"\nI1002 13:53:24.201244       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:53:24.378793       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"177.623519ms\"\nI1002 13:53:32.699271       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:53:32.864285       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"165.088519ms\"\nI1002 13:53:32.880442       1 service.go:306] Service dns-7654/test-service-2 updated: 0 ports\nI1002 13:53:32.880488       1 service.go:446] Removing service port \"dns-7654/test-service-2:http\"\nI1002 13:53:32.880682       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:53:32.931281       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.778314ms\"\nI1002 13:53:33.932416       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 
13:53:33.984886       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.583852ms\"\nI1002 13:53:55.441791       1 service.go:306] Service services-6734/service-proxy-toggled updated: 0 ports\nI1002 13:53:55.441842       1 service.go:446] Removing service port \"services-6734/service-proxy-toggled\"\nI1002 13:53:55.442028       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:53:55.595634       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"153.769967ms\"\nI1002 13:53:55.595817       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:53:55.702831       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"107.144459ms\"\nI1002 13:54:06.588275       1 service.go:306] Service services-6734/service-proxy-toggled updated: 1 ports\nI1002 13:54:06.588417       1 service.go:421] Adding new service port \"services-6734/service-proxy-toggled\" at 100.71.103.83:80/TCP\nI1002 13:54:06.588594       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:06.673348       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"84.923648ms\"\nI1002 13:54:06.673526       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:06.766958       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"93.458639ms\"\nI1002 13:54:11.366554       1 service.go:306] Service volume-expand-9027-2352/csi-hostpathplugin updated: 1 ports\nI1002 13:54:11.366607       1 service.go:421] Adding new service port \"volume-expand-9027-2352/csi-hostpathplugin:dummy\" at 100.67.43.64:12345/TCP\nI1002 13:54:11.366738       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:11.455203       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"88.581651ms\"\nI1002 13:54:11.455356       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:11.518030       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"62.76086ms\"\nI1002 13:54:16.679197       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:16.783838       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"104.77271ms\"\nI1002 13:54:28.945507       1 service.go:306] Service webhook-5920/e2e-test-webhook updated: 1 ports\nI1002 13:54:28.945568       1 service.go:421] Adding new service port \"webhook-5920/e2e-test-webhook\" at 100.66.193.95:8443/TCP\nI1002 13:54:28.945937       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:29.058707       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"113.124092ms\"\nI1002 13:54:29.059013       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:29.273202       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"214.33238ms\"\nI1002 13:54:32.059110       1 service.go:306] Service webhook-5920/e2e-test-webhook updated: 0 ports\nI1002 13:54:32.059204       1 service.go:446] Removing service port \"webhook-5920/e2e-test-webhook\"\nI1002 13:54:32.059342       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:32.147876       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"88.658134ms\"\nI1002 13:54:32.148390       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:32.264345       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"116.419109ms\"\nI1002 13:54:33.676533       1 service.go:306] Service deployment-8344/test-rolling-update-with-lb updated: 1 ports\nI1002 13:54:33.676594       1 service.go:421] Adding new service port \"deployment-8344/test-rolling-update-with-lb\" at 100.67.112.190:80/TCP\nI1002 13:54:33.676853       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:33.721854       1 proxier.go:1292] \"Opened 
local port\" port=\"\\\"nodePort for deployment-8344/test-rolling-update-with-lb\\\" (:31624/tcp4)\"\nI1002 13:54:33.728125       1 service_health.go:98] Opening healthcheck \"deployment-8344/test-rolling-update-with-lb\" on port 31003\nI1002 13:54:33.728220       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.626991ms\"\nI1002 13:54:34.728622       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:34.802487       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"74.043925ms\"\nI1002 13:54:37.694984       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:37.750255       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"55.339311ms\"\nI1002 13:54:37.750495       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:37.802461       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.144856ms\"\nI1002 13:54:38.100713       1 service.go:306] Service services-6734/service-proxy-toggled updated: 0 ports\nI1002 13:54:38.803089       1 service.go:446] Removing service port \"services-6734/service-proxy-toggled\"\nI1002 13:54:38.803380       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:38.861234       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"58.145164ms\"\nI1002 13:54:43.411769       1 service.go:306] Service ephemeral-7870-6766/csi-hostpathplugin updated: 1 ports\nI1002 13:54:43.411821       1 service.go:421] Adding new service port \"ephemeral-7870-6766/csi-hostpathplugin:dummy\" at 100.70.114.187:12345/TCP\nI1002 13:54:43.412038       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:43.477955       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"66.130014ms\"\nI1002 13:54:43.478196       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:43.560558       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"82.553302ms\"\nI1002 13:54:45.221445       1 service.go:306] Service provisioning-3680-5610/csi-hostpathplugin updated: 0 ports\nI1002 13:54:45.221491       1 service.go:446] Removing service port \"provisioning-3680-5610/csi-hostpathplugin:dummy\"\nI1002 13:54:45.221592       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:45.305854       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"84.347182ms\"\nI1002 13:54:46.306771       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:46.375348       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"68.608025ms\"\nI1002 13:54:48.399089       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:48.467703       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"68.73091ms\"\nI1002 13:54:50.266706       1 service.go:306] Service services-8937/affinity-nodeport-timeout updated: 1 ports\nI1002 13:54:50.266751       1 service.go:421] Adding new service port \"services-8937/affinity-nodeport-timeout\" at 100.65.83.192:80/TCP\nI1002 13:54:50.266882       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:50.321184       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for services-8937/affinity-nodeport-timeout\\\" (:30607/tcp4)\"\nI1002 13:54:50.332094       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"65.330891ms\"\nI1002 13:54:50.332313       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:50.434671       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"102.533372ms\"\nI1002 13:54:51.446649       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:51.536546       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"90.018589ms\"\nI1002 13:54:52.407908      
 1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:52.461014       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.217961ms\"\nI1002 13:54:59.021988       1 service.go:306] Service provisioning-2229-8969/csi-hostpathplugin updated: 1 ports\nI1002 13:54:59.022040       1 service.go:421] Adding new service port \"provisioning-2229-8969/csi-hostpathplugin:dummy\" at 100.64.58.52:12345/TCP\nI1002 13:54:59.022286       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:59.080733       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"58.684446ms\"\nI1002 13:54:59.080976       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:54:59.151151       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"70.371599ms\"\nI1002 13:55:00.151972       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:00.296704       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"144.884006ms\"\nI1002 13:55:01.314651       1 service.go:306] Service volume-expand-9027-2352/csi-hostpathplugin updated: 0 ports\nI1002 13:55:01.314933       1 service.go:446] Removing service port \"volume-expand-9027-2352/csi-hostpathplugin:dummy\"\nI1002 13:55:01.315250       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:01.380863       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"65.923109ms\"\nI1002 13:55:02.381183       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:02.543626       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"162.581982ms\"\nI1002 13:55:06.051736       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:06.106367       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"54.720463ms\"\nI1002 13:55:17.798627       1 service.go:306] Service webhook-322/e2e-test-webhook updated: 1 ports\nI1002 13:55:17.798686       1 service.go:421] Adding new service port \"webhook-322/e2e-test-webhook\" at 100.66.128.61:8443/TCP\nI1002 13:55:17.798825       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:17.884775       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"86.083778ms\"\nI1002 13:55:17.885021       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:17.973683       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"88.855782ms\"\nI1002 13:55:18.331772       1 service.go:306] Service ephemeral-2941-236/csi-hostpathplugin updated: 1 ports\nI1002 13:55:18.890821       1 service.go:306] Service volume-293-8506/csi-hostpathplugin updated: 1 ports\nI1002 13:55:18.890872       1 service.go:421] Adding new service port \"ephemeral-2941-236/csi-hostpathplugin:dummy\" at 100.65.9.48:12345/TCP\nI1002 13:55:18.890894       1 service.go:421] Adding new service port \"volume-293-8506/csi-hostpathplugin:dummy\" at 100.71.40.185:12345/TCP\nI1002 13:55:18.891100       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:18.952677       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"61.791898ms\"\nI1002 13:55:19.952953       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:20.016028       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"63.202182ms\"\nI1002 13:55:20.351397       1 service.go:306] Service webhook-322/e2e-test-webhook updated: 0 ports\nI1002 13:55:21.017098       1 service.go:446] Removing service port \"webhook-322/e2e-test-webhook\"\nI1002 13:55:21.017478       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:21.071859       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"54.764291ms\"\nI1002 13:55:26.757521       1 proxier.go:857] \"Syncing iptables 
rules\"\nI1002 13:55:26.844333       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"86.894223ms\"\nI1002 13:55:27.757934       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:27.821183       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"63.387291ms\"\nI1002 13:55:38.386716       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:38.472095       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"85.577246ms\"\nI1002 13:55:38.472362       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:38.537959       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"65.815149ms\"\nI1002 13:55:49.928152       1 service.go:306] Service provisioning-2229-8969/csi-hostpathplugin updated: 0 ports\nI1002 13:55:49.928198       1 service.go:446] Removing service port \"provisioning-2229-8969/csi-hostpathplugin:dummy\"\nI1002 13:55:49.928340       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:49.991823       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"63.59449ms\"\nI1002 13:55:49.992044       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:50.048102       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"56.232829ms\"\nI1002 13:55:52.561994       1 service.go:306] Service services-8937/affinity-nodeport-timeout updated: 0 ports\nI1002 13:55:52.562033       1 service.go:446] Removing service port \"services-8937/affinity-nodeport-timeout\"\nI1002 13:55:52.562168       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:52.630864       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"68.816518ms\"\nI1002 13:55:52.631039       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:55:52.699291       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"68.375584ms\"\nI1002 13:56:03.687853       1 service.go:306] Service services-1713/nodeport-service updated: 1 ports\nI1002 13:56:03.687930       1 service.go:421] Adding new service port \"services-1713/nodeport-service\" at 100.65.85.167:80/TCP\nI1002 13:56:03.688093       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:03.881877       1 service.go:306] Service services-1713/externalsvc updated: 1 ports\nI1002 13:56:03.898405       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for services-1713/nodeport-service\\\" (:32580/tcp4)\"\nI1002 13:56:03.926282       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"238.361469ms\"\nI1002 13:56:03.926394       1 service.go:421] Adding new service port \"services-1713/externalsvc\" at 100.69.89.184:80/TCP\nI1002 13:56:03.926711       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:04.057248       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"130.851032ms\"\nI1002 13:56:04.993090       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:05.043146       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.230471ms\"\nI1002 13:56:10.177900       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:10.232052       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"54.273355ms\"\nI1002 13:56:10.842654       1 service.go:306] Service services-1713/nodeport-service updated: 0 ports\nI1002 13:56:10.842696       1 service.go:446] Removing service port \"services-1713/nodeport-service\"\nI1002 13:56:10.842901       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:10.910185       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.468843ms\"\nI1002 13:56:11.168114       1 service.go:306] Service provisioning-9686-8726/csi-hostpathplugin 
updated: 1 ports\nI1002 13:56:11.910293       1 service.go:421] Adding new service port \"provisioning-9686-8726/csi-hostpathplugin:dummy\" at 100.71.61.251:12345/TCP\nI1002 13:56:11.910460       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:11.964631       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"54.351687ms\"\nI1002 13:56:15.666458       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:15.742495       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"76.11762ms\"\nI1002 13:56:16.259984       1 service.go:306] Service ephemeral-7870-6766/csi-hostpathplugin updated: 0 ports\nI1002 13:56:16.260025       1 service.go:446] Removing service port \"ephemeral-7870-6766/csi-hostpathplugin:dummy\"\nI1002 13:56:16.260133       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:16.310724       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.685575ms\"\nI1002 13:56:17.311456       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:17.364736       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.378254ms\"\nI1002 13:56:18.022049       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:18.100893       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"78.975641ms\"\nI1002 13:56:19.101447       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:19.175540       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"74.248848ms\"\nI1002 13:56:30.386745       1 service.go:306] Service services-1713/externalsvc updated: 0 ports\nI1002 13:56:30.386961       1 service.go:446] Removing service port \"services-1713/externalsvc\"\nI1002 13:56:30.387232       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:30.438757       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.786353ms\"\nI1002 13:56:30.450842       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:30.601836       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"151.135509ms\"\nI1002 13:56:30.954807       1 service.go:306] Service volume-293-8506/csi-hostpathplugin updated: 0 ports\nI1002 13:56:31.602008       1 service.go:446] Removing service port \"volume-293-8506/csi-hostpathplugin:dummy\"\nI1002 13:56:31.602344       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:31.680059       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"78.006871ms\"\nI1002 13:56:37.036398       1 service.go:306] Service crd-webhook-4973/e2e-test-crd-conversion-webhook updated: 1 ports\nI1002 13:56:37.036482       1 service.go:421] Adding new service port \"crd-webhook-4973/e2e-test-crd-conversion-webhook\" at 100.69.17.243:9443/TCP\nI1002 13:56:37.036616       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:37.137306       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"100.802535ms\"\nI1002 13:56:37.137482       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:37.257334       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"119.982748ms\"\nI1002 13:56:40.892686       1 service.go:306] Service apply-4965/test-svc updated: 1 ports\nI1002 13:56:40.892833       1 service.go:421] Adding new service port \"apply-4965/test-svc\" at 100.66.243.207:8080/UDP\nI1002 13:56:40.892959       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:40.953181       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"60.323494ms\"\nI1002 13:56:42.905958       1 service.go:306] Service crd-webhook-4973/e2e-test-crd-conversion-webhook updated: 0 ports\nI1002 13:56:42.906004       1 service.go:446] Removing 
service port \"crd-webhook-4973/e2e-test-crd-conversion-webhook\"\nI1002 13:56:42.906174       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:43.048457       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"142.440528ms\"\nI1002 13:56:43.048757       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:43.175644       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"127.05309ms\"\nI1002 13:56:46.710522       1 service.go:306] Service apply-4965/test-svc updated: 0 ports\nI1002 13:56:46.710569       1 service.go:446] Removing service port \"apply-4965/test-svc\"\nI1002 13:56:46.710706       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:56:46.777559       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"66.980055ms\"\nI1002 13:57:07.176458       1 service.go:306] Service ephemeral-2941-236/csi-hostpathplugin updated: 0 ports\nI1002 13:57:07.176503       1 service.go:446] Removing service port \"ephemeral-2941-236/csi-hostpathplugin:dummy\"\nI1002 13:57:07.176655       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:57:07.240750       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"64.234286ms\"\nI1002 13:57:07.240884       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:57:07.514296       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"273.496681ms\"\nI1002 13:57:09.018076       1 service.go:306] Service webhook-5874/e2e-test-webhook updated: 1 ports\nI1002 13:57:09.018122       1 service.go:421] Adding new service port \"webhook-5874/e2e-test-webhook\" at 100.69.135.156:8443/TCP\nI1002 13:57:09.018399       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:57:09.075288       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"57.15685ms\"\nI1002 13:57:09.093666       1 service.go:306] Service provisioning-9686-8726/csi-hostpathplugin updated: 0 ports\nI1002 13:57:10.075441       1 service.go:446] Removing service port \"provisioning-9686-8726/csi-hostpathplugin:dummy\"\nI1002 13:57:10.075774       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:57:10.131275       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"55.835442ms\"\nE1002 13:57:13.023869       1 utils.go:282] Skipping invalid IP: \nI1002 13:57:16.021760       1 service.go:306] Service webhook-5874/e2e-test-webhook updated: 0 ports\nI1002 13:57:16.021805       1 service.go:446] Removing service port \"webhook-5874/e2e-test-webhook\"\nI1002 13:57:16.021941       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:57:16.089094       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.277433ms\"\nI1002 13:57:16.092161       1 proxier.go:857] \"Syncing iptables rules\"\nI1002 13:57:16.162468       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"70.554944ms\"\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-33-188.ap-southeast-2.compute.internal ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-37-133.ap-southeast-2.compute.internal ====\nI1002 13:34:11.033583       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI1002 13:34:11.033750       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI1002 13:34:11.033760       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI1002 13:34:11.033767       1 flags.go:59] FLAG: --bind-address-hard-fail=\"false\"\nI1002 13:34:11.033774       1 flags.go:59] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI1002 13:34:11.033779       1 flags.go:59] FLAG: --cleanup=\"false\"\nI1002 13:34:11.033784       1 
flags.go:59] FLAG: --cluster-cidr="100.96.0.0/11"
I1002 13:34:11.033790       1 flags.go:59] FLAG: --config=""
I1002 13:34:11.033795       1 flags.go:59] FLAG: --config-sync-period="15m0s"
I1002 13:34:11.033813       1 flags.go:59] FLAG: --conntrack-max-per-core="131072"
I1002 13:34:11.033822       1 flags.go:59] FLAG: --conntrack-min="131072"
I1002 13:34:11.033827       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I1002 13:34:11.033836       1 flags.go:59] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I1002 13:34:11.033841       1 flags.go:59] FLAG: --detect-local-mode=""
I1002 13:34:11.038142       1 flags.go:59] FLAG: --feature-gates=""
I1002 13:34:11.038150       1 flags.go:59] FLAG: --healthz-bind-address="0.0.0.0:10256"
I1002 13:34:11.038156       1 flags.go:59] FLAG: --healthz-port="10256"
I1002 13:34:11.038162       1 flags.go:59] FLAG: --help="false"
I1002 13:34:11.038167       1 flags.go:59] FLAG: --hostname-override="ip-172-20-37-133.ap-southeast-2.compute.internal"
I1002 13:34:11.038172       1 flags.go:59] FLAG: --iptables-masquerade-bit="14"
I1002 13:34:11.038175       1 flags.go:59] FLAG: --iptables-min-sync-period="1s"
I1002 13:34:11.038180       1 flags.go:59] FLAG: --iptables-sync-period="30s"
I1002 13:34:11.038183       1 flags.go:59] FLAG: --ipvs-exclude-cidrs="[]"
I1002 13:34:11.038200       1 flags.go:59] FLAG: --ipvs-min-sync-period="0s"
I1002 13:34:11.038204       1 flags.go:59] FLAG: --ipvs-scheduler=""
I1002 13:34:11.038207       1 flags.go:59] FLAG: --ipvs-strict-arp="false"
I1002 13:34:11.038211       1 flags.go:59] FLAG: --ipvs-sync-period="30s"
I1002 13:34:11.038215       1 flags.go:59] FLAG: --ipvs-tcp-timeout="0s"
I1002 13:34:11.038218       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout="0s"
I1002 13:34:11.038226       1 flags.go:59] FLAG: --ipvs-udp-timeout="0s"
I1002 13:34:11.038230       1 flags.go:59] FLAG: --kube-api-burst="10"
I1002 13:34:11.038233       1 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I1002 13:34:11.038238       1 flags.go:59] FLAG: --kube-api-qps="5"
I1002 13:34:11.038243       1 flags.go:59] FLAG: --kubeconfig="/var/lib/kube-proxy/kubeconfig"
I1002 13:34:11.038247       1 flags.go:59] FLAG: --log-backtrace-at=":0"
I1002 13:34:11.038254       1 flags.go:59] FLAG: --log-dir=""
I1002 13:34:11.038258       1 flags.go:59] FLAG: --log-file="/var/log/kube-proxy.log"
I1002 13:34:11.038262       1 flags.go:59] FLAG: --log-file-max-size="1800"
I1002 13:34:11.038266       1 flags.go:59] FLAG: --log-flush-frequency="5s"
I1002 13:34:11.038269       1 flags.go:59] FLAG: --logtostderr="false"
I1002 13:34:11.038273       1 flags.go:59] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
I1002 13:34:11.038278       1 flags.go:59] FLAG: --masquerade-all="false"
I1002 13:34:11.038282       1 flags.go:59] FLAG: --master="https://127.0.0.1"
I1002 13:34:11.038285       1 flags.go:59] FLAG: --metrics-bind-address="127.0.0.1:10249"
I1002 13:34:11.038289       1 flags.go:59] FLAG: --metrics-port="10249"
I1002 13:34:11.038293       1 flags.go:59] FLAG: --nodeport-addresses="[]"
I1002 13:34:11.038297       1 flags.go:59] FLAG: --one-output="false"
I1002 13:34:11.038301       1 flags.go:59] FLAG: --oom-score-adj="-998"
I1002 13:34:11.038305       1 flags.go:59] FLAG: --profiling="false"
I1002 13:34:11.038308       1 flags.go:59] FLAG: --proxy-mode=""
I1002 13:34:11.038313       1 flags.go:59] FLAG: --proxy-port-range=""
I1002 13:34:11.038317       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=""
I1002 13:34:11.038321       1 flags.go:59] FLAG: --skip-headers="false"
I1002 13:34:11.038327       1 flags.go:59] FLAG: --skip-log-headers="false"
I1002 13:34:11.038330       1 flags.go:59] FLAG: --stderrthreshold="2"
I1002 13:34:11.038334       1 flags.go:59] FLAG: --udp-timeout="250ms"
I1002 13:34:11.038338       1 flags.go:59] FLAG: --v="2"
I1002 13:34:11.038342       1 flags.go:59] FLAG: --version="false"
I1002 13:34:11.038348       1 flags.go:59] FLAG: --vmodule=""
I1002 13:34:11.038352       1 flags.go:59] FLAG: --write-config-to=""
W1002 13:34:11.038358       1 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1002 13:34:11.047146       1 feature_gate.go:243] feature gates: &{map[]}
I1002 13:34:11.047252       1 feature_gate.go:243] feature gates: &{map[]}
E1002 13:34:11.076806       1 node.go:161] Failed to retrieve node info: Get "https://127.0.0.1/api/v1/nodes/ip-172-20-37-133.ap-southeast-2.compute.internal": dial tcp 127.0.0.1:443: connect: connection refused
E1002 13:34:22.200867       1 node.go:161] Failed to retrieve node info: Get "https://127.0.0.1/api/v1/nodes/ip-172-20-37-133.ap-southeast-2.compute.internal": net/http: TLS handshake timeout
E1002 13:34:43.037582       1 node.go:161] Failed to retrieve node info: nodes "ip-172-20-37-133.ap-southeast-2.compute.internal" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I1002 13:34:47.050536       1 node.go:172] Successfully retrieved node IP: 172.20.37.133
I1002 13:34:47.050566       1 server_others.go:140] Detected node IP 172.20.37.133
W1002 13:34:47.050596       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
I1002 13:34:47.050677       1 server_others.go:177] DetectLocalMode: 'ClusterCIDR'
I1002 13:34:47.081824       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I1002 13:34:47.081856       1 server_others.go:212] Using iptables Proxier.
I1002 13:34:47.081868       1 server_others.go:219] creating dualStackProxier for iptables.
W1002 13:34:47.081879       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I1002 13:34:47.081947       1 utils.go:375] Changed sysctl "net/ipv4/conf/all/route_localnet": 0 -> 1
I1002 13:34:47.082006       1 proxier.go:282] "using iptables mark for masquerade" ipFamily=IPv4 mark="0x00004000"
I1002 13:34:47.082039       1 proxier.go:330] "iptables sync params" ipFamily=IPv4 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I1002 13:34:47.082074       1 proxier.go:340] "iptables supports --random-fully" ipFamily=IPv4
I1002 13:34:47.082184       1 proxier.go:282] "using iptables mark for masquerade" ipFamily=IPv6 mark="0x00004000"
I1002 13:34:47.082280       1 proxier.go:330] "iptables sync params" ipFamily=IPv6 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I1002 13:34:47.082300       1 proxier.go:340] "iptables supports --random-fully" ipFamily=IPv6
I1002 13:34:47.082459       1 server.go:643] Version: v1.21.5
I1002 13:34:47.083770       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144
I1002 13:34:47.083808       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I1002 13:34:47.083891       1 mount_linux.go:197] Detected OS without systemd
I1002 13:34:47.084057       1 conntrack.go:83] Setting conntrack hashsize to 65536
I1002 13:34:47.102249       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1002 13:34:47.102303       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1002 13:34:47.102435       1 config.go:315] Starting service config controller
I1002 13:34:47.102446       1 shared_informer.go:240] Waiting for caches to sync for service config
I1002 13:34:47.102467       1 config.go:224] Starting endpoint slice config controller
I1002 13:34:47.102471       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W1002 13:34:47.108574       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I1002 13:34:47.108964       1 service.go:306] Service default/kubernetes updated: 1 ports
W1002 13:34:47.109906       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I1002 13:34:47.203198       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I1002 13:34:47.203198       1 shared_informer.go:247] Caches are synced for service config 
I1002 13:34:47.203384       1 proxier.go:816] "Not syncing iptables until Services and Endpoints have been received from master"
I1002 13:34:47.203476       1 proxier.go:816] "Not syncing iptables until Services and Endpoints have been received from master"
I1002 13:34:47.203538       1 service.go:421] Adding new service port "default/kubernetes:https" at 100.64.0.1:443/TCP
I1002 13:34:47.203613       1 proxier.go:857] "Syncing iptables rules"
I1002 13:34:47.244844       1 proxier.go:824] "syncProxyRules complete" elapsed="41.316ms"
I1002 13:34:47.244971       1 proxier.go:857] "Syncing iptables rules"
I1002 13:34:47.272916       1 proxier.go:824] "syncProxyRules complete" elapsed="28.037343ms"
I1002 13:34:50.855658       1 service.go:306] Service kube-system/kube-dns updated: 3 ports
I1002 13:34:50.855718       1 service.go:421] Adding new service port "kube-system/kube-dns:dns-tcp" at 100.64.0.10:53/TCP
I1002 13:34:50.855731       1 service.go:421] Adding new service port "kube-system/kube-dns:metrics" at 100.64.0.10:9153/TCP
I1002 13:34:50.855740       1 service.go:421] Adding new service port "kube-system/kube-dns:dns" at 100.64.0.10:53/UDP
I1002 13:34:50.855763       1 proxier.go:857] "Syncing iptables rules"
I1002 13:34:50.930043       1 proxier.go:824] "syncProxyRules complete" elapsed="74.320915ms"
I1002 13:35:08.933815       1 proxier.go:857] "Syncing iptables rules"
I1002 13:35:08.982513       1 proxier.go:824] "syncProxyRules complete" elapsed="48.719371ms"
I1002 13:37:15.850585       1 proxier.go:841] "Stale service" protocol="udp" svcPortName="kube-system/kube-dns:dns" clusterIP="100.64.0.10"
I1002 13:37:15.850802       1 proxier.go:857] "Syncing iptables rules"
I1002 13:37:15.906480       1 proxier.go:824] "syncProxyRules complete" elapsed="57.211091ms"
I1002 13:37:15.906721       1 proxier.go:857] "Syncing iptables rules"
I1002 13:37:15.934484       1 proxier.go:824] "syncProxyRules complete" elapsed="27.819801ms"
I1002 13:37:17.739813       1 proxier.go:857] "Syncing iptables rules"
I1002 13:37:17.789453       1 proxier.go:824] "syncProxyRules complete" elapsed="49.689623ms"
I1002 13:37:18.789737       1 proxier.go:857] "Syncing iptables rules"
I1002 13:37:18.865561       1 proxier.go:824] "syncProxyRules complete" elapsed="75.896774ms"
I1002 13:40:02.434831       1 service.go:306] Service services-8289/tolerate-unready updated: 1 ports
I1002 13:40:02.435560       1 service.go:421] Adding new service port "services-8289/tolerate-unready:http" at 100.70.94.78:80/TCP
I1002 13:40:02.435665       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:02.499238       1 proxier.go:824] "syncProxyRules complete" elapsed="64.340147ms"
I1002 13:40:02.499357       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:02.546866       1 proxier.go:824] "syncProxyRules complete" elapsed="47.529254ms"
I1002 13:40:03.667621       1 service.go:306] Service services-2368/nodeport-collision-1 updated: 1 ports
I1002 13:40:03.667802       1 service.go:421] Adding new service port "services-2368/nodeport-collision-1" at 100.68.222.32:80/TCP
I1002 13:40:03.667931       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:03.701337       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-2368/nodeport-collision-1\" (:32592/tcp4)"
I1002 13:40:03.705189       1 proxier.go:824] "syncProxyRules complete" elapsed="37.391853ms"
I1002 13:40:04.052838       1 service.go:306] Service services-2368/nodeport-collision-1 updated: 0 ports
I1002 13:40:04.259461       1 service.go:306] Service services-2368/nodeport-collision-2 updated: 1 ports
I1002 13:40:04.454330       1 service.go:446] Removing service port "services-2368/nodeport-collision-1"
I1002 13:40:04.454573       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:04.504846       1 proxier.go:824] "syncProxyRules complete" elapsed="50.513526ms"
I1002 13:40:05.505807       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:05.576775       1 proxier.go:824] "syncProxyRules complete" elapsed="70.98744ms"
I1002 13:40:08.860100       1 service.go:306] Service ephemeral-2177-4926/csi-hostpathplugin updated: 1 ports
I1002 13:40:08.860331       1 service.go:421] Adding new service port "ephemeral-2177-4926/csi-hostpathplugin:dummy" at 100.68.190.212:12345/TCP
I1002 13:40:08.860460       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:08.901807       1 proxier.go:824] "syncProxyRules complete" elapsed="41.498124ms"
I1002 13:40:08.901942       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:08.948792       1 proxier.go:824] "syncProxyRules complete" elapsed="46.936336ms"
I1002 13:40:09.542958       1 service.go:306] Service webhook-6380/e2e-test-webhook updated: 1 ports
I1002 13:40:09.949728       1 service.go:421] Adding new service port "webhook-6380/e2e-test-webhook" at 100.71.239.80:8443/TCP
I1002 13:40:09.949956       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:10.004317       1 proxier.go:824] "syncProxyRules complete" elapsed="54.606155ms"
I1002 13:40:11.781736       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:11.836955       1 proxier.go:824] "syncProxyRules complete" elapsed="55.265239ms"
I1002 13:40:14.299896       1 service.go:306] Service webhook-4076/e2e-test-webhook updated: 1 ports
I1002 13:40:14.299937       1 service.go:421] Adding new service port "webhook-4076/e2e-test-webhook" at 100.71.156.226:8443/TCP
I1002 13:40:14.300161       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:14.339481       1 proxier.go:824] "syncProxyRules complete" elapsed="39.541708ms"
I1002 13:40:14.339632       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:14.379673       1 proxier.go:824] "syncProxyRules complete" elapsed="40.156488ms"
I1002 13:40:15.752939       1 service.go:306] Service webhook-6380/e2e-test-webhook updated: 0 ports
I1002 13:40:15.753134       1 service.go:446] Removing service port "webhook-6380/e2e-test-webhook"
I1002 13:40:15.753230       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:15.792021       1 proxier.go:824] "syncProxyRules complete" elapsed="38.881695ms"
I1002 13:40:16.792749       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:16.829103       1 proxier.go:824] "syncProxyRules complete" elapsed="36.388855ms"
I1002 13:40:19.577127       1 service.go:306] Service webhook-4076/e2e-test-webhook updated: 0 ports
I1002 13:40:19.577306       1 service.go:446] Removing service port "webhook-4076/e2e-test-webhook"
I1002 13:40:19.577465       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:19.630156       1 proxier.go:824] "syncProxyRules complete" elapsed="52.844073ms"
I1002 13:40:19.630224       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:19.682668       1 proxier.go:824] "syncProxyRules complete" elapsed="52.477148ms"
I1002 13:40:20.684327       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:20.859174       1 proxier.go:824] "syncProxyRules complete" elapsed="174.903142ms"
I1002 13:40:24.437439       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:24.499914       1 proxier.go:824] "syncProxyRules complete" elapsed="62.499632ms"
I1002 13:40:26.795227       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:26.834956       1 proxier.go:824] "syncProxyRules complete" elapsed="39.790283ms"
I1002 13:40:27.549913       1 service.go:306] Service services-8289/tolerate-unready updated: 0 ports
I1002 13:40:27.549962       1 service.go:446] Removing service port "services-8289/tolerate-unready:http"
I1002 13:40:27.549998       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:27.591925       1 proxier.go:824] "syncProxyRules complete" elapsed="41.954027ms"
I1002 13:40:28.593068       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:28.652021       1 proxier.go:824] "syncProxyRules complete" elapsed="58.978669ms"
I1002 13:40:32.472213       1 service.go:306] Service ephemeral-1287-6716/csi-hostpathplugin updated: 1 ports
I1002 13:40:32.472767       1 service.go:421] Adding new service port "ephemeral-1287-6716/csi-hostpathplugin:dummy" at 100.68.14.90:12345/TCP
I1002 13:40:32.472936       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:32.523258       1 proxier.go:824] "syncProxyRules complete" elapsed="50.516796ms"
I1002 13:40:32.523481       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:32.576316       1 proxier.go:824] "syncProxyRules complete" elapsed="52.858215ms"
I1002 13:40:33.179666       1 service.go:306] Service services-8787/nodeport-test updated: 1 ports
I1002 13:40:33.576442       1 service.go:421] Adding new service port "services-8787/nodeport-test:http" at 100.70.161.186:80/TCP
I1002 13:40:33.576494       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:33.603224       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-8787/nodeport-test:http\" (:30291/tcp4)"
I1002 13:40:33.607090       1 proxier.go:824] "syncProxyRules complete" elapsed="30.664292ms"
I1002 13:40:35.233087       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:35.274398       1 proxier.go:824] "syncProxyRules complete" elapsed="41.334736ms"
I1002 13:40:36.481517       1 service.go:306] Service services-6530/affinity-clusterip-transition updated: 1 ports
I1002 13:40:36.481565       1 service.go:421] Adding new service port "services-6530/affinity-clusterip-transition" at 100.71.90.34:80/TCP
I1002 13:40:36.481602       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:36.529857       1 proxier.go:824] "syncProxyRules complete" elapsed="48.291033ms"
I1002 13:40:36.529997       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:36.586793       1 proxier.go:824] "syncProxyRules complete" elapsed="56.899199ms"
I1002 13:40:37.587828       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:37.634983       1 proxier.go:824] "syncProxyRules complete" elapsed="47.205085ms"
I1002 13:40:38.636559       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:38.723281       1 proxier.go:824] "syncProxyRules complete" elapsed="86.782862ms"
I1002 13:40:39.673433       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:39.725274       1 proxier.go:824] "syncProxyRules complete" elapsed="51.873171ms"
I1002 13:40:40.726077       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:40.858126       1 proxier.go:824] "syncProxyRules complete" elapsed="132.110037ms"
I1002 13:40:43.904077       1 service.go:306] Service provisioning-1384-4286/csi-hostpathplugin updated: 1 ports
I1002 13:40:43.904427       1 service.go:421] Adding new service port "provisioning-1384-4286/csi-hostpathplugin:dummy" at 100.71.225.113:12345/TCP
I1002 13:40:43.904518       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:43.941603       1 proxier.go:824] "syncProxyRules complete" elapsed="37.185419ms"
I1002 13:40:43.941917       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:43.977892       1 proxier.go:824] "syncProxyRules complete" elapsed="36.24282ms"
I1002 13:40:51.090017       1 service.go:306] Service services-6530/affinity-clusterip-transition updated: 1 ports
I1002 13:40:51.090234       1 service.go:423] Updating existing service port "services-6530/affinity-clusterip-transition" at 100.71.90.34:80/TCP
I1002 13:40:51.090385       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:51.162651       1 proxier.go:824] "syncProxyRules complete" elapsed="72.54096ms"
I1002 13:40:51.275167       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:51.309396       1 proxier.go:824] "syncProxyRules complete" elapsed="34.267917ms"
I1002 13:40:53.561340       1 service.go:306] Service services-6530/affinity-clusterip-transition updated: 1 ports
I1002 13:40:53.561628       1 service.go:423] Updating existing service port "services-6530/affinity-clusterip-transition" at 100.71.90.34:80/TCP
I1002 13:40:53.561753       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:53.600298       1 proxier.go:824] "syncProxyRules complete" elapsed="38.672141ms"
I1002 13:40:56.471675       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:56.507087       1 proxier.go:824] "syncProxyRules complete" elapsed="35.453808ms"
I1002 13:40:57.474442       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:57.506999       1 proxier.go:824] "syncProxyRules complete" elapsed="32.59302ms"
I1002 13:40:58.078504       1 service.go:306] Service conntrack-8083/svc-udp updated: 1 ports
I1002 13:40:58.078543       1 service.go:421] Adding new service port "conntrack-8083/svc-udp:udp" at 100.67.91.190:80/UDP
I1002 13:40:58.078579       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:58.112086       1 proxier.go:824] "syncProxyRules complete" elapsed="33.540927ms"
I1002 13:40:58.579426       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:58.664476       1 proxier.go:824] "syncProxyRules complete" elapsed="85.088985ms"
I1002 13:40:58.679732       1 service.go:306] Service services-8787/nodeport-test updated: 0 ports
I1002 13:40:59.666151       1 service.go:446] Removing service port "services-8787/nodeport-test:http"
I1002 13:40:59.666224       1 proxier.go:857] "Syncing iptables rules"
I1002 13:40:59.726163       1 proxier.go:824] "syncProxyRules complete" elapsed="60.014487ms"
I1002 13:40:59.741778       1 service.go:306] Service services-3177/sourceip-test updated: 1 ports
I1002 13:41:00.727123       1 service.go:421] Adding new service port "services-3177/sourceip-test" at 100.64.145.91:8080/TCP
I1002 13:41:00.727284       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:00.865294       1 proxier.go:824] "syncProxyRules complete" elapsed="138.1859ms"
I1002 13:41:02.630528       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:02.688363       1 proxier.go:824] "syncProxyRules complete" elapsed="57.866977ms"
I1002 13:41:06.233314       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:06.267970       1 proxier.go:824] "syncProxyRules complete" elapsed="34.745613ms"
I1002 13:41:06.408149       1 proxier.go:841] "Stale service" protocol="udp" svcPortName="conntrack-8083/svc-udp:udp" clusterIP="100.67.91.190"
I1002 13:41:06.408282       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:06.445954       1 proxier.go:824] "syncProxyRules complete" elapsed="37.887436ms"
I1002 13:41:10.323323       1 service.go:306] Service services-6530/affinity-clusterip-transition updated: 0 ports
I1002 13:41:10.324829       1 service.go:446] Removing service port "services-6530/affinity-clusterip-transition"
I1002 13:41:10.324968       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:10.387676       1 proxier.go:824] "syncProxyRules complete" elapsed="62.834814ms"
I1002 13:41:10.387875       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:10.448245       1 proxier.go:824] "syncProxyRules complete" elapsed="60.401168ms"
I1002 13:41:17.597667       1 service.go:306] Service webhook-5500/e2e-test-webhook updated: 1 ports
I1002 13:41:17.597712       1 service.go:421] Adding new service port "webhook-5500/e2e-test-webhook" at 100.65.252.177:8443/TCP
I1002 13:41:17.598183       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:17.657571       1 proxier.go:824] "syncProxyRules complete" elapsed="59.848168ms"
I1002 13:41:17.658164       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:17.709598       1 proxier.go:824] "syncProxyRules complete" elapsed="51.990506ms"
I1002 13:41:20.369561       1 service.go:306] Service webhook-5500/e2e-test-webhook updated: 0 ports
I1002 13:41:20.369830       1 service.go:446] Removing service port "webhook-5500/e2e-test-webhook"
I1002 13:41:20.369972       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:20.430027       1 proxier.go:824] "syncProxyRules complete" elapsed="60.185003ms"
I1002 13:41:20.430282       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:20.467736       1 proxier.go:824] "syncProxyRules complete" elapsed="37.674425ms"
I1002 13:41:21.467982       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:21.506165       1 proxier.go:824] "syncProxyRules complete" elapsed="38.235495ms"
I1002 13:41:22.501775       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:22.573752       1 proxier.go:824] "syncProxyRules complete" elapsed="72.036763ms"
I1002 13:41:22.688846       1 service.go:306] Service services-3177/sourceip-test updated: 0 ports
I1002 13:41:23.574687       1 service.go:446] Removing service port "services-3177/sourceip-test"
I1002 13:41:23.574866       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:23.612357       1 proxier.go:824] "syncProxyRules complete" elapsed="37.666417ms"
I1002 13:41:33.763020       1 service.go:306] Service provisioning-5029-9776/csi-hostpathplugin updated: 1 ports
I1002 13:41:33.763239       1 service.go:421] Adding new service port "provisioning-5029-9776/csi-hostpathplugin:dummy" at 100.71.108.225:12345/TCP
I1002 13:41:33.763349       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:33.872083       1 proxier.go:824] "syncProxyRules complete" elapsed="108.839247ms"
I1002 13:41:33.872171       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:33.926070       1 proxier.go:824] "syncProxyRules complete" elapsed="53.941842ms"
I1002 13:41:34.801552       1 service.go:306] Service ephemeral-2177-4926/csi-hostpathplugin updated: 0 ports
I1002 13:41:34.804036       1 service.go:446] Removing service port "ephemeral-2177-4926/csi-hostpathplugin:dummy"
I1002 13:41:34.804231       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:34.910749       1 proxier.go:824] "syncProxyRules complete" elapsed="106.69964ms"
I1002 13:41:35.814885       1 service.go:306] Service kubectl-2513/agnhost-primary updated: 1 ports
I1002 13:41:35.815118       1 service.go:421] Adding new service port "kubectl-2513/agnhost-primary" at 100.65.124.0:6379/TCP
I1002 13:41:35.815299       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:36.044729       1 proxier.go:824] "syncProxyRules complete" elapsed="229.606917ms"
I1002 13:41:37.044919       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:37.090067       1 proxier.go:824] "syncProxyRules complete" elapsed="45.193701ms"
I1002 13:41:38.746038       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:38.814021       1 proxier.go:824] "syncProxyRules complete" elapsed="68.024686ms"
I1002 13:41:38.826290       1 service.go:306] Service conntrack-8083/svc-udp updated: 0 ports
I1002 13:41:38.826391       1 service.go:446] Removing service port "conntrack-8083/svc-udp:udp"
I1002 13:41:38.826488       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:38.890889       1 proxier.go:824] "syncProxyRules complete" elapsed="64.480694ms"
I1002 13:41:44.636758       1 service.go:306] Service kubectl-2513/agnhost-primary updated: 0 ports
I1002 13:41:44.636959       1 service.go:446] Removing service port "kubectl-2513/agnhost-primary"
I1002 13:41:44.637093       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:44.713441       1 proxier.go:824] "syncProxyRules complete" elapsed="76.474305ms"
I1002 13:41:44.713648       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:44.779893       1 proxier.go:824] "syncProxyRules complete" elapsed="66.280375ms"
I1002 13:41:45.857928       1 service.go:306] Service provisioning-3835-3579/csi-hostpathplugin updated: 1 ports
I1002 13:41:45.858189       1 service.go:421] Adding new service port "provisioning-3835-3579/csi-hostpathplugin:dummy" at 100.68.249.246:12345/TCP
I1002 13:41:45.858341       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:45.901008       1 proxier.go:824] "syncProxyRules complete" elapsed="42.824765ms"
I1002 13:41:46.906216       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:46.989270       1 proxier.go:824] "syncProxyRules complete" elapsed="83.100249ms"
I1002 13:41:47.880508       1 service.go:306] Service services-4417/externalname-service updated: 1 ports
I1002 13:41:47.880682       1 service.go:421] Adding new service port "services-4417/externalname-service:http" at 100.68.101.53:80/TCP
I1002 13:41:47.880790       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:47.911528       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-4417/externalname-service:http\" (:31317/tcp4)"
I1002 13:41:47.915352       1 proxier.go:824] "syncProxyRules complete" elapsed="34.670781ms"
I1002 13:41:48.915529       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:48.956958       1 proxier.go:824] "syncProxyRules complete" elapsed="41.481356ms"
I1002 13:41:49.837802       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:49.880961       1 proxier.go:824] "syncProxyRules complete" elapsed="43.185018ms"
I1002 13:41:50.666753       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:50.731378       1 proxier.go:824] "syncProxyRules complete" elapsed="64.66384ms"
I1002 13:41:51.291468       1 service.go:306] Service conntrack-1597/boom-server updated: 1 ports
I1002 13:41:51.733217       1 service.go:421] Adding new service port "conntrack-1597/boom-server" at 100.68.17.91:9000/TCP
I1002 13:41:51.733393       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:51.763774       1 proxier.go:824] "syncProxyRules complete" elapsed="30.570186ms"
I1002 13:41:59.874931       1 proxier.go:857] "Syncing iptables rules"
I1002 13:41:59.913081       1 proxier.go:824] "syncProxyRules complete" elapsed="38.175819ms"
I1002 13:42:05.186253       1 service.go:306] Service services-4417/externalname-service updated: 0 ports
I1002 13:42:05.186342       1 service.go:446] Removing service port "services-4417/externalname-service:http"
I1002 13:42:05.186418       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:05.223928       1 proxier.go:824] "syncProxyRules complete" elapsed="37.581159ms"
I1002 13:42:05.224089       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:05.256481       1 proxier.go:824] "syncProxyRules complete" elapsed="32.516547ms"
I1002 13:42:08.427299       1 service.go:306] Service ephemeral-9915-93/csi-hostpathplugin updated: 1 ports
I1002 13:42:08.427506       1 service.go:421] Adding new service port "ephemeral-9915-93/csi-hostpathplugin:dummy" at 100.68.79.32:12345/TCP
I1002 13:42:08.427688       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:08.468960       1 proxier.go:824] "syncProxyRules complete" elapsed="41.614174ms"
I1002 13:42:08.469013       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:08.500765       1 proxier.go:824] "syncProxyRules complete" elapsed="31.771316ms"
W1002 13:42:10.111033       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I1002 13:42:10.474792       1 service.go:306] Service provisioning-1384-4286/csi-hostpathplugin updated: 0 ports
I1002 13:42:10.474901       1 service.go:446] Removing service port "provisioning-1384-4286/csi-hostpathplugin:dummy"
I1002 13:42:10.474987       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:10.528340       1 proxier.go:824] "syncProxyRules complete" elapsed="53.431154ms"
I1002 13:42:10.528406       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:10.562512       1 proxier.go:824] "syncProxyRules complete" elapsed="34.128222ms"
I1002 13:42:12.956180       1 service.go:306] Service ephemeral-5508-1475/csi-hostpathplugin updated: 1 ports
I1002 13:42:12.956223       1 service.go:421] Adding new service port "ephemeral-5508-1475/csi-hostpathplugin:dummy" at 100.64.126.131:12345/TCP
I1002 13:42:12.956267       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:12.997773       1 proxier.go:824] "syncProxyRules complete" elapsed="41.54763ms"
I1002 13:42:12.997951       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:13.028186       1 proxier.go:824] "syncProxyRules complete" elapsed="30.37693ms"
I1002 13:42:20.963311       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:21.134800       1 proxier.go:824] "syncProxyRules complete" elapsed="171.51833ms"
I1002 13:42:21.134871       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:21.212452       1 proxier.go:824] "syncProxyRules complete" elapsed="77.600878ms"
I1002 13:42:32.892800       1 service.go:306] Service provisioning-3835-3579/csi-hostpathplugin updated: 0 ports
I1002 13:42:32.893538       1 service.go:446] Removing service port "provisioning-3835-3579/csi-hostpathplugin:dummy"
I1002 13:42:32.893655       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:32.929484       1 proxier.go:824] "syncProxyRules complete" elapsed="35.952276ms"
I1002 13:42:32.929552       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:32.970251       1 proxier.go:824] "syncProxyRules complete" elapsed="40.732499ms"
I1002 13:42:33.545292       1 service.go:306] Service provisioning-5029-9776/csi-hostpathplugin updated: 0 ports
I1002 13:42:33.970886       1 service.go:446] Removing service port "provisioning-5029-9776/csi-hostpathplugin:dummy"
I1002 13:42:33.970978       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:34.023216       1 proxier.go:824] "syncProxyRules complete" elapsed="52.342912ms"
I1002 13:42:51.229072       1 service.go:306] Service provisioning-3514-7135/csi-hostpathplugin updated: 1 ports
I1002 13:42:51.229260       1 service.go:421] Adding new service port "provisioning-3514-7135/csi-hostpathplugin:dummy" at 100.69.85.242:12345/TCP
I1002 13:42:51.229362       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:51.267591       1 proxier.go:824] "syncProxyRules complete" elapsed="38.333246ms"
I1002 13:42:51.267724       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:51.298232       1 proxier.go:824] "syncProxyRules complete" elapsed="30.604472ms"
I1002 13:42:55.876634       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:55.919452       1 proxier.go:824] "syncProxyRules complete" elapsed="42.856674ms"
I1002 13:42:57.815462       1 service.go:306] Service webhook-2359/e2e-test-webhook updated: 1 ports
I1002 13:42:57.815535       1 service.go:421] Adding new service port "webhook-2359/e2e-test-webhook" at 100.66.7.199:8443/TCP
I1002 13:42:57.815611       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:57.879643       1 proxier.go:824] "syncProxyRules complete" elapsed="64.135352ms"
I1002 13:42:57.879819       1 proxier.go:857] "Syncing iptables rules"
I1002 13:42:57.937716       1 proxier.go:824] "syncProxyRules complete" elapsed="58.034492ms"
I1002 13:43:00.030283       1 service.go:306] Service webhook-198/e2e-test-webhook updated: 1 ports
I1002 13:43:00.030416       1 service.go:421] Adding new service port "webhook-198/e2e-test-webhook" at 100.66.78.233:8443/TCP
I1002 13:43:00.030569       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:00.118814       1 proxier.go:824] "syncProxyRules complete" elapsed="88.396163ms"
I1002 13:43:00.119046       1 proxier.go:857] "Syncing iptables rules"
I1002 13:43:00.213707       1 proxier.go:824] "syncProxyRules complete" elapsed="94.724098ms"
I1002 13:43:02.805139       1 service.go:306] Service ephemeral-1287-6716/csi-hostpathplugin updated: 0 ports
I1002 13:43:02.805362       1 service.go:446] Removing service port "ephemeral-1287-6716/csi-hostpathplugin:dummy"
I1002 13:43:02.805463       1 proxier.go:857] "Synci