Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-06-17 00:43
Elapsed: 38m56s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 125 lines ...
I0617 00:44:18.012682    4079 up.go:43] Cleaning up any leaked resources from previous cluster
I0617 00:44:18.012785    4079 dumplogs.go:38] /logs/artifacts/f91f1734-cf04-11eb-8f2c-1681a1291760/kops toolbox dump --name e2e-bf5376b553-82074.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0617 00:44:18.028779    4099 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0617 00:44:18.028875    4099 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-bf5376b553-82074.test-cncf-aws.k8s.io" not found
W0617 00:44:18.553123    4079 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0617 00:44:18.553189    4079 down.go:48] /logs/artifacts/f91f1734-cf04-11eb-8f2c-1681a1291760/kops delete cluster --name e2e-bf5376b553-82074.test-cncf-aws.k8s.io --yes
I0617 00:44:18.570251    4109 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0617 00:44:18.570366    4109 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-bf5376b553-82074.test-cncf-aws.k8s.io" not found
I0617 00:44:19.377594    4079 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/06/17 00:44:19 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0617 00:44:19.385628    4079 http.go:37] curl https://ip.jsb.workers.dev
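
The two curl probes above are the harness discovering its own external IP: it queries the GCE metadata server first and, when that returns 404 (no external access-config on the instance), falls back to https://ip.jsb.workers.dev. A minimal Go sketch of that fallback order (illustrative only; this is not the job's actual http.go):

package main

import (
	"fmt"
	"io"
	"net/http"
)

// externalIP mirrors the probe order in the log: GCE metadata first,
// then the public echo service as a fallback.
func externalIP() (string, error) {
	req, err := http.NewRequest("GET",
		"http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip", nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Metadata-Flavor", "Google") // required by the GCE metadata server
	if resp, err := http.DefaultClient.Do(req); err == nil {
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			b, rerr := io.ReadAll(resp.Body)
			return string(b), rerr
		}
		// e.g. the 404 seen above; fall through to the echo service
	}
	resp, err := http.Get("https://ip.jsb.workers.dev")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	fmt.Println(externalIP())
}
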
I0617 00:44:19.499239    4079 up.go:144] /logs/artifacts/f91f1734-cf04-11eb-8f2c-1681a1291760/kops create cluster --name e2e-bf5376b553-82074.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.2 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210610 --channel=alpha --networking=flannel --container-runtime=docker --admin-access 34.66.34.40/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones sa-east-1a --master-size c5.large
I0617 00:44:19.514701    4119 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0617 00:44:19.514816    4119 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0617 00:44:19.558817    4119 create_cluster.go:748] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0617 00:44:20.058942    4119 new_cluster.go:1054]  Cloud Provider ID = aws
... skipping 41 lines ...

I0617 00:44:49.885341    4079 up.go:181] /logs/artifacts/f91f1734-cf04-11eb-8f2c-1681a1291760/kops validate cluster --name e2e-bf5376b553-82074.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0617 00:44:49.910405    4140 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0617 00:44:49.910535    4140 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-bf5376b553-82074.test-cncf-aws.k8s.io

W0617 00:44:51.340220    4140 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
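
The message above describes the mechanism being waited on: kops seeds the api DNS record with the placeholder 203.0.113.123, and the dns-controller deployment later rewrites it to the real master address. A small illustrative Go poller for that transition (not kops's validator; the hostname is this job's cluster):

package main

import (
	"fmt"
	"net"
	"time"
)

const (
	apiHost     = "api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io"
	placeholder = "203.0.113.123" // the placeholder address kops creates
)

func main() {
	for {
		addrs, err := net.LookupHost(apiHost)
		if err == nil && len(addrs) > 0 && addrs[0] != placeholder {
			fmt.Println("API DNS updated by dns-controller:", addrs)
			return
		}
		fmt.Println("not yet:", addrs, err) // NXDOMAIN at first, then the placeholder
		time.Sleep(10 * time.Second)
	}
}
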
... skipping validation output repeated verbatim after each retry below (identical to the block above) ...
W0617 00:45:01.387215    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:45:11.418557    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:45:21.464422    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:45:31.492092    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:45:41.535430    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:45:51.581282    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:46:01.726624    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:46:11.761986    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:46:21.793227    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:46:31.824915    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:46:41.857856    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:46:51.888453    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:47:01.948859    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:47:11.977745    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:47:22.010359    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:47:32.042597    4140 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0617 00:47:42.092128    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:47:52.122368    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:48:02.155237    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:48:12.187727    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:48:22.225014    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:48:32.254789    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
W0617 00:48:42.284928    4140 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
... skipping 13 lines ...
Pod	kube-system/kube-flannel-ds-6nvnl					system-node-critical pod "kube-flannel-ds-6nvnl" is pending
Pod	kube-system/kube-flannel-ds-b5hmw					system-node-critical pod "kube-flannel-ds-b5hmw" is pending
Pod	kube-system/kube-flannel-ds-smz9r					system-node-critical pod "kube-flannel-ds-smz9r" is pending
Pod	kube-system/kube-proxy-ip-172-20-48-221.sa-east-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-48-221.sa-east-1.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-60-41.sa-east-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-60-41.sa-east-1.compute.internal" is pending

Validation Failed
W0617 00:48:56.356718    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 11 lines ...
Node	ip-172-20-55-34.sa-east-1.compute.internal	node "ip-172-20-55-34.sa-east-1.compute.internal" of role "node" is not ready
Node	ip-172-20-60-41.sa-east-1.compute.internal	node "ip-172-20-60-41.sa-east-1.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-autoscaler-6f594f4c58-xchvv	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-xchvv" is pending
Pod	kube-system/coredns-f45c4bf76-pcz96		system-cluster-critical pod "coredns-f45c4bf76-pcz96" is pending
Pod	kube-system/kube-flannel-ds-9ff4g		system-node-critical pod "kube-flannel-ds-9ff4g" is pending

Validation Failed
W0617 00:49:09.071645    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-55-34.sa-east-1.compute.internal	node "ip-172-20-55-34.sa-east-1.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-f45c4bf76-sng9x		system-cluster-critical pod "coredns-f45c4bf76-sng9x" is pending

Validation Failed
W0617 00:49:21.564012    4140 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 678 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "nfs" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 144 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 394 lines ...
Jun 17 00:52:02.035: INFO: AfterEach: Cleaning up test resources.
Jun 17 00:52:02.035: INFO: Deleting PersistentVolumeClaim "pvc-cvl6d"
Jun 17 00:52:02.186: INFO: Deleting PersistentVolume "hostpath-cgpls"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:02.362: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 93 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:52:02.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1614" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:02.623: INFO: Driver nfs doesn't support ext3 -- skipping
... skipping 65 lines ...
• [SLOW TEST:12.801 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replicaset should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:11.821: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 81 lines ...
• [SLOW TEST:14.148 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:12.894: INFO: Driver local doesn't support ext3 -- skipping
... skipping 49 lines ...
• [SLOW TEST:13.437 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:52:14.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5386" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:14.618: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 94 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 17 00:52:03.318: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d" in namespace "downward-api-4231" to be "Succeeded or Failed"
Jun 17 00:52:03.476: INFO: Pod "downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 157.658401ms
Jun 17 00:52:05.622: INFO: Pod "downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303441971s
Jun 17 00:52:07.770: INFO: Pod "downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451985304s
Jun 17 00:52:09.917: INFO: Pod "downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.598379803s
Jun 17 00:52:12.063: INFO: Pod "downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.744269537s
Jun 17 00:52:14.208: INFO: Pod "downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.890077131s
Jun 17 00:52:16.354: INFO: Pod "downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.03532885s
STEP: Saw pod success
Jun 17 00:52:16.354: INFO: Pod "downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d" satisfied condition "Succeeded or Failed"
Jun 17 00:52:16.499: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d container client-container: <nil>
STEP: delete the pod
Jun 17 00:52:16.817: INFO: Waiting for pod downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d to disappear
Jun 17 00:52:16.962: INFO: Pod downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.858 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Jun 17 00:52:00.887: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 00:52:01.039: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 17 00:52:01.526: INFO: Waiting up to 5m0s for pod "pod-1bad920c-7687-40c0-b6aa-17707ce0d576" in namespace "emptydir-5480" to be "Succeeded or Failed"
Jun 17 00:52:01.670: INFO: Pod "pod-1bad920c-7687-40c0-b6aa-17707ce0d576": Phase="Pending", Reason="", readiness=false. Elapsed: 143.794229ms
Jun 17 00:52:03.815: INFO: Pod "pod-1bad920c-7687-40c0-b6aa-17707ce0d576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288899028s
Jun 17 00:52:05.970: INFO: Pod "pod-1bad920c-7687-40c0-b6aa-17707ce0d576": Phase="Pending", Reason="", readiness=false. Elapsed: 4.444281848s
Jun 17 00:52:08.115: INFO: Pod "pod-1bad920c-7687-40c0-b6aa-17707ce0d576": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588739833s
Jun 17 00:52:10.260: INFO: Pod "pod-1bad920c-7687-40c0-b6aa-17707ce0d576": Phase="Pending", Reason="", readiness=false. Elapsed: 8.734020886s
Jun 17 00:52:12.404: INFO: Pod "pod-1bad920c-7687-40c0-b6aa-17707ce0d576": Phase="Pending", Reason="", readiness=false. Elapsed: 10.878093709s
Jun 17 00:52:14.549: INFO: Pod "pod-1bad920c-7687-40c0-b6aa-17707ce0d576": Phase="Pending", Reason="", readiness=false. Elapsed: 13.023594637s
Jun 17 00:52:16.694: INFO: Pod "pod-1bad920c-7687-40c0-b6aa-17707ce0d576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.168397901s
STEP: Saw pod success
Jun 17 00:52:16.694: INFO: Pod "pod-1bad920c-7687-40c0-b6aa-17707ce0d576" satisfied condition "Succeeded or Failed"
Jun 17 00:52:16.843: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-1bad920c-7687-40c0-b6aa-17707ce0d576 container test-container: <nil>
STEP: delete the pod
Jun 17 00:52:17.152: INFO: Waiting for pod pod-1bad920c-7687-40c0-b6aa-17707ce0d576 to disappear
Jun 17 00:52:17.296: INFO: Pod pod-1bad920c-7687-40c0-b6aa-17707ce0d576 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:18.510 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Jun 17 00:51:59.687: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 00:51:59.831: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Jun 17 00:52:00.300: INFO: Waiting up to 5m0s for pod "var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9" in namespace "var-expansion-7826" to be "Succeeded or Failed"
Jun 17 00:52:00.447: INFO: Pod "var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9": Phase="Pending", Reason="", readiness=false. Elapsed: 146.367865ms
Jun 17 00:52:02.598: INFO: Pod "var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297491919s
Jun 17 00:52:04.743: INFO: Pod "var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.442756641s
Jun 17 00:52:06.887: INFO: Pod "var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.587221733s
Jun 17 00:52:09.033: INFO: Pod "var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.732779483s
Jun 17 00:52:11.180: INFO: Pod "var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.879430667s
Jun 17 00:52:13.324: INFO: Pod "var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.024238669s
Jun 17 00:52:15.469: INFO: Pod "var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.168788942s
Jun 17 00:52:17.614: INFO: Pod "var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.313437197s
STEP: Saw pod success
Jun 17 00:52:17.614: INFO: Pod "var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9" satisfied condition "Succeeded or Failed"
Jun 17 00:52:17.758: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9 container dapi-container: <nil>
STEP: delete the pod
Jun 17 00:52:18.056: INFO: Waiting for pod var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9 to disappear
Jun 17 00:52:18.200: INFO: Pod var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:19.534 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:19.502 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 42 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
Jun 17 00:52:18.963: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun 17 00:52:18.963: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9461 describe pod agnhost-primary-jvn66'
Jun 17 00:52:19.876: INFO: stderr: ""
Jun 17 00:52:19.876: INFO: stdout: "Name:         agnhost-primary-jvn66\nNamespace:    kubectl-9461\nPriority:     0\nNode:         ip-172-20-60-41.sa-east-1.compute.internal/172.20.60.41\nStart Time:   Thu, 17 Jun 2021 00:52:16 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.3.9\nIPs:\n  IP:           100.96.3.9\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   docker://f56a92643c8c6133d046f02056b01ad97bbc32c24338ebf850cf311b9fdf1c84\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 17 Jun 2021 00:52:17 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zzzlz (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-zzzlz:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  3s    default-scheduler  Successfully assigned kubectl-9461/agnhost-primary-jvn66 to ip-172-20-60-41.sa-east-1.compute.internal\n  Normal  Pulled     2s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    2s    kubelet            Created container agnhost-primary\n  Normal  Started    2s    kubelet            Started container agnhost-primary\n"
Jun 17 00:52:19.876: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9461 describe rc agnhost-primary'
Jun 17 00:52:20.854: INFO: stderr: ""
Jun 17 00:52:20.854: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-9461\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: agnhost-primary-jvn66\n"
Jun 17 00:52:20.854: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9461 describe service agnhost-primary'
Jun 17 00:52:21.835: INFO: stderr: ""
Jun 17 00:52:21.835: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-9461\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.68.116.152\nIPs:               100.68.116.152\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.3.9:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jun 17 00:52:21.980: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9461 describe node ip-172-20-38-69.sa-east-1.compute.internal'
Jun 17 00:52:23.481: INFO: stderr: ""
Jun 17 00:52:23.482: INFO: stdout: "Name:               ip-172-20-38-69.sa-east-1.compute.internal\nRoles:              control-plane,master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=c5.large\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=sa-east-1\n                    failure-domain.beta.kubernetes.io/zone=sa-east-1a\n                    kops.k8s.io/instancegroup=master-sa-east-1a\n                    kops.k8s.io/kops-controller-pki=\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-38-69.sa-east-1.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=master\n                    node-role.kubernetes.io/control-plane=\n                    node-role.kubernetes.io/master=\n                    node.kubernetes.io/exclude-from-external-load-balancers=\n                    node.kubernetes.io/instance-type=c5.large\n                    topology.kubernetes.io/region=sa-east-1\n                    topology.kubernetes.io/zone=sa-east-1a\nAnnotations:        flannel.alpha.coreos.com/backend-data: {\"VtepMAC\":\"5a:cf:18:54:d7:16\"}\n                    flannel.alpha.coreos.com/backend-type: vxlan\n                    flannel.alpha.coreos.com/kube-subnet-manager: true\n                    flannel.alpha.coreos.com/public-ip: 172.20.38.69\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Thu, 17 Jun 2021 00:47:24 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-38-69.sa-east-1.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Thu, 17 Jun 2021 00:52:15 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Thu, 17 Jun 2021 00:48:03 +0000   Thu, 17 Jun 2021 00:48:03 +0000   FlannelIsUp                  Flannel is running on this node\n  MemoryPressure       False   Thu, 17 Jun 2021 00:48:24 +0000   Thu, 17 Jun 2021 00:47:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 17 Jun 2021 00:48:24 +0000   Thu, 17 Jun 2021 00:47:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 17 Jun 2021 00:48:24 +0000   Thu, 17 Jun 2021 00:47:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 17 Jun 2021 00:48:24 +0000   Thu, 17 Jun 2021 00:48:04 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   172.20.38.69\n  ExternalIP:   52.67.87.221\n  Hostname:     ip-172-20-38-69.sa-east-1.compute.internal\n  InternalDNS:  ip-172-20-38-69.sa-east-1.compute.internal\n  ExternalDNS:  ec2-52-67-87-221.sa-east-1.compute.amazonaws.com\nCapacity:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           48725632Ki\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      3784392Ki\n  pods:                        110\nAllocatable:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           44905542377\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      3681992Ki\n  pods:                        110\nSystem Info:\n  Machine ID:                 ec22f175930dd2738bcda119f08e03ec\n  System UUID:                ec22f175-930d-d273-8bcd-a119f08e03ec\n  Boot ID:                    89ac830a-a223-4cfd-af79-b567ae2bc4f6\n  Kernel Version:             5.8.0-1035-aws\n  OS Image:                   Ubuntu 20.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://20.10.5\n  Kubelet Version:            v1.21.2\n  Kube-Proxy Version:         v1.21.2\nPodCIDR:                      100.96.0.0/24\nPodCIDRs:                     100.96.0.0/24\nProviderID:                   aws:///sa-east-1a/i-0fbf20cba103a55bd\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                                  ------------  ----------  ---------------  -------------  ---\n  kube-system                 dns-controller-5f98b58844-cjrql                                       50m (2%)      0 (0%)      50Mi (1%)        0 (0%)         4m45s\n  kube-system                 etcd-manager-events-ip-172-20-38-69.sa-east-1.compute.internal        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m15s\n  kube-system                 etcd-manager-main-ip-172-20-38-69.sa-east-1.compute.internal          200m (10%)    0 (0%)      100Mi (2%)       0 (0%)         4m5s\n  kube-system                 kops-controller-fltkc                                                 50m (2%)      0 (0%)      50Mi (1%)        0 (0%)         3m54s\n  kube-system                 kube-apiserver-ip-172-20-38-69.sa-east-1.compute.internal             150m (7%)     0 (0%)      0 (0%)           0 (0%)         4m12s\n  kube-system                 kube-controller-manager-ip-172-20-38-69.sa-east-1.compute.internal    100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m25s\n  kube-system                 kube-flannel-ds-h4s24                                                 100m (5%)     0 (0%)      100Mi (2%)       100Mi (2%)     4m46s\n  kube-system                 kube-proxy-ip-172-20-38-69.sa-east-1.compute.internal                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m11s\n  kube-system                 kube-scheduler-ip-172-20-38-69.sa-east-1.compute.internal             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m29s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                    Requests     Limits\n  --------                    --------     ------\n  cpu                         950m (47%)   0 (0%)\n  memory                      400Mi (11%)  100Mi (2%)\n  ephemeral-storage           0 (0%)       0 (0%)\n  hugepages-1Gi               0 (0%)       0 (0%)\n  hugepages-2Mi               0 (0%)       0 (0%)\n  attachable-volumes-aws-ebs  0            0\nEvents:\n  Type    Reason                   Age              From        Message\n  ----    ------                   ----             ----        -------\n  Normal  Starting                 6m1s             kubelet     Starting kubelet.\n  Normal  NodeHasSufficientMemory  6m (x8 over 6m)  kubelet     Node ip-172-20-38-69.sa-east-1.compute.internal status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)  kubelet     Node ip-172-20-38-69.sa-east-1.compute.internal status is now: NodeHasNoDiskPressure\n  Normal  NodeHasSufficientPID     6m (x7 over 6m)  kubelet     Node ip-172-20-38-69.sa-east-1.compute.internal status is now: NodeHasSufficientPID\n  Normal  NodeAllocatableEnforced  6m               kubelet     Updated Node Allocatable limit across pods\n  Normal  Starting                 4m56s            kube-proxy  Starting kube-proxy.\n"
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:24.766: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 23 lines ...
Jun 17 00:52:01.577: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 00:52:01.721: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Jun 17 00:52:02.163: INFO: Waiting up to 5m0s for pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f" in namespace "var-expansion-1992" to be "Succeeded or Failed"
Jun 17 00:52:02.307: INFO: Pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f": Phase="Pending", Reason="", readiness=false. Elapsed: 144.109549ms
Jun 17 00:52:04.452: INFO: Pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288628909s
Jun 17 00:52:06.602: INFO: Pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439039891s
Jun 17 00:52:08.747: INFO: Pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584011159s
Jun 17 00:52:10.893: INFO: Pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.729543309s
Jun 17 00:52:13.038: INFO: Pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.874334302s
Jun 17 00:52:15.187: INFO: Pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.023630756s
Jun 17 00:52:17.331: INFO: Pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.167873867s
Jun 17 00:52:19.477: INFO: Pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.314209472s
Jun 17 00:52:21.622: INFO: Pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.45835752s
Jun 17 00:52:23.766: INFO: Pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.602926975s
STEP: Saw pod success
Jun 17 00:52:23.766: INFO: Pod "var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f" satisfied condition "Succeeded or Failed"
Jun 17 00:52:23.910: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f container dapi-container: <nil>
STEP: delete the pod
Jun 17 00:52:24.207: INFO: Waiting for pod var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f to disappear
Jun 17 00:52:24.351: INFO: Pod var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:25.501 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:24.811: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 79 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:52:02.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 92 lines ...
• [SLOW TEST:26.099 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:25.157: INFO: Only supported for providers [openstack] (not aws)
... skipping 35 lines ...
Jun 17 00:52:25.416: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [1.020 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 33 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:52:25.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-9837" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:25.960: INFO: Only supported for providers [gce gke] (not aws)
... skipping 100 lines ...
Jun 17 00:52:22.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Jun 17 00:52:23.144: INFO: Waiting up to 5m0s for pod "client-containers-6feb4ba8-bf2f-4944-b554-d98ecf8536e2" in namespace "containers-3515" to be "Succeeded or Failed"
Jun 17 00:52:23.313: INFO: Pod "client-containers-6feb4ba8-bf2f-4944-b554-d98ecf8536e2": Phase="Pending", Reason="", readiness=false. Elapsed: 168.347714ms
Jun 17 00:52:25.457: INFO: Pod "client-containers-6feb4ba8-bf2f-4944-b554-d98ecf8536e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.313037812s
STEP: Saw pod success
Jun 17 00:52:25.457: INFO: Pod "client-containers-6feb4ba8-bf2f-4944-b554-d98ecf8536e2" satisfied condition "Succeeded or Failed"
Jun 17 00:52:25.601: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod client-containers-6feb4ba8-bf2f-4944-b554-d98ecf8536e2 container agnhost-container: <nil>
STEP: delete the pod
Jun 17 00:52:25.900: INFO: Waiting for pod client-containers-6feb4ba8-bf2f-4944-b554-d98ecf8536e2 to disappear
Jun 17 00:52:26.043: INFO: Pod client-containers-6feb4ba8-bf2f-4944-b554-d98ecf8536e2 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:52:26.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3515" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:26.345: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
Jun 17 00:52:00.024: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 00:52:00.169: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
Jun 17 00:52:00.461: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 17 00:52:00.926: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3125" in namespace "provisioning-3125" to be "Succeeded or Failed"
Jun 17 00:52:01.112: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Pending", Reason="", readiness=false. Elapsed: 185.291832ms
Jun 17 00:52:03.259: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332628464s
Jun 17 00:52:05.406: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479567274s
Jun 17 00:52:07.551: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Pending", Reason="", readiness=false. Elapsed: 6.624812827s
Jun 17 00:52:09.696: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Pending", Reason="", readiness=false. Elapsed: 8.769148892s
Jun 17 00:52:11.841: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Pending", Reason="", readiness=false. Elapsed: 10.91408913s
... skipping 2 lines ...
Jun 17 00:52:18.280: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Pending", Reason="", readiness=false. Elapsed: 17.354039198s
Jun 17 00:52:20.426: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Pending", Reason="", readiness=false. Elapsed: 19.499649798s
Jun 17 00:52:22.571: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Pending", Reason="", readiness=false. Elapsed: 21.644685084s
Jun 17 00:52:24.716: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Pending", Reason="", readiness=false. Elapsed: 23.789837397s
Jun 17 00:52:26.861: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.934439527s
STEP: Saw pod success
Jun 17 00:52:26.861: INFO: Pod "hostpath-symlink-prep-provisioning-3125" satisfied condition "Succeeded or Failed"
Jun 17 00:52:26.861: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3125" in namespace "provisioning-3125"
Jun 17 00:52:27.022: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3125" to be fully deleted
Jun 17 00:52:27.166: INFO: Creating resource for inline volume
Jun 17 00:52:27.166: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Jun 17 00:52:27.166: INFO: Deleting pod "pod-subpath-test-inlinevolume-flll" in namespace "provisioning-3125"
Jun 17 00:52:27.456: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3125" in namespace "provisioning-3125" to be "Succeeded or Failed"
Jun 17 00:52:27.600: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Pending", Reason="", readiness=false. Elapsed: 144.305297ms
Jun 17 00:52:29.746: INFO: Pod "hostpath-symlink-prep-provisioning-3125": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289922078s
STEP: Saw pod success
Jun 17 00:52:29.746: INFO: Pod "hostpath-symlink-prep-provisioning-3125" satisfied condition "Succeeded or Failed"
Jun 17 00:52:29.746: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3125" in namespace "provisioning-3125"
Jun 17 00:52:29.921: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3125" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:52:30.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3125" for this suite.
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Jun 17 00:52:25.809: INFO: Waiting up to 5m0s for pod "pod-7befc04c-914a-475c-b4ba-564e61d277b5" in namespace "emptydir-3568" to be "Succeeded or Failed"
Jun 17 00:52:25.953: INFO: Pod "pod-7befc04c-914a-475c-b4ba-564e61d277b5": Phase="Pending", Reason="", readiness=false. Elapsed: 144.037377ms
Jun 17 00:52:28.098: INFO: Pod "pod-7befc04c-914a-475c-b4ba-564e61d277b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289287912s
Jun 17 00:52:30.245: INFO: Pod "pod-7befc04c-914a-475c-b4ba-564e61d277b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436301597s
Jun 17 00:52:32.390: INFO: Pod "pod-7befc04c-914a-475c-b4ba-564e61d277b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.581558573s
STEP: Saw pod success
Jun 17 00:52:32.390: INFO: Pod "pod-7befc04c-914a-475c-b4ba-564e61d277b5" satisfied condition "Succeeded or Failed"
Jun 17 00:52:32.535: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-7befc04c-914a-475c-b4ba-564e61d277b5 container test-container: <nil>
STEP: delete the pod
Jun 17 00:52:32.841: INFO: Waiting for pod pod-7befc04c-914a-475c-b4ba-564e61d277b5 to disappear
Jun 17 00:52:32.985: INFO: Pod pod-7befc04c-914a-475c-b4ba-564e61d277b5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on default medium should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":2,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:33.317: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":20,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:52:25.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
• [SLOW TEST:8.301 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":2,"skipped":20,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:35.747: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 47 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":3,"skipped":46,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:35.811: INFO: Only supported for providers [azure] (not aws)
... skipping 79 lines ...
STEP: Destroying namespace "apply-7572" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for subresources","total":-1,"completed":3,"skipped":30,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:36.729: INFO: Only supported for providers [openstack] (not aws)
... skipping 251 lines ...
• [SLOW TEST:37.986 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 219 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:52:38.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-7436" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":2,"skipped":19,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:40.393: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 107 lines ...
• [SLOW TEST:30.156 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:45.319: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Jun 17 00:52:37.753: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6123" to be "Succeeded or Failed"
Jun 17 00:52:37.897: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 143.797199ms
Jun 17 00:52:40.044: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290685778s
Jun 17 00:52:42.188: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434497415s
Jun 17 00:52:44.337: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583360186s
Jun 17 00:52:46.483: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.729209286s
STEP: Saw pod success
Jun 17 00:52:46.483: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun 17 00:52:46.626: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jun 17 00:52:46.920: INFO: Waiting for pod pod-host-path-test to disappear
Jun 17 00:52:47.063: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.481 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":4,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:47.365: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 91 lines ...
Jun 17 00:52:36.699: INFO: PersistentVolumeClaim pvc-6z447 found but phase is Pending instead of Bound.
Jun 17 00:52:38.844: INFO: PersistentVolumeClaim pvc-6z447 found and phase=Bound (15.163043922s)
Jun 17 00:52:38.844: INFO: Waiting up to 3m0s for PersistentVolume local-njsc5 to have phase Bound
Jun 17 00:52:38.987: INFO: PersistentVolume local-njsc5 found and phase=Bound (142.807079ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-htlj
STEP: Creating a pod to test subpath
Jun 17 00:52:39.417: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-htlj" in namespace "provisioning-4746" to be "Succeeded or Failed"
Jun 17 00:52:39.560: INFO: Pod "pod-subpath-test-preprovisionedpv-htlj": Phase="Pending", Reason="", readiness=false. Elapsed: 142.988234ms
Jun 17 00:52:41.704: INFO: Pod "pod-subpath-test-preprovisionedpv-htlj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286627733s
Jun 17 00:52:43.859: INFO: Pod "pod-subpath-test-preprovisionedpv-htlj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.442474892s
STEP: Saw pod success
Jun 17 00:52:43.859: INFO: Pod "pod-subpath-test-preprovisionedpv-htlj" satisfied condition "Succeeded or Failed"
Jun 17 00:52:44.017: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-htlj container test-container-subpath-preprovisionedpv-htlj: <nil>
STEP: delete the pod
Jun 17 00:52:44.330: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-htlj to disappear
Jun 17 00:52:44.474: INFO: Pod pod-subpath-test-preprovisionedpv-htlj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-htlj
Jun 17 00:52:44.474: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-htlj" in namespace "provisioning-4746"
... skipping 52 lines ...
• [SLOW TEST:49.603 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:48.693: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 72 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":4,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:52:54.630: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Jun 17 00:52:48.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Jun 17 00:52:49.553: INFO: Waiting up to 5m0s for pod "test-pod-07f6b687-d562-495b-9875-96950a907e2e" in namespace "svcaccounts-2184" to be "Succeeded or Failed"
Jun 17 00:52:49.696: INFO: Pod "test-pod-07f6b687-d562-495b-9875-96950a907e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 142.769502ms
Jun 17 00:52:51.840: INFO: Pod "test-pod-07f6b687-d562-495b-9875-96950a907e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286859559s
Jun 17 00:52:53.984: INFO: Pod "test-pod-07f6b687-d562-495b-9875-96950a907e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431046479s
Jun 17 00:52:56.129: INFO: Pod "test-pod-07f6b687-d562-495b-9875-96950a907e2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.575592053s
STEP: Saw pod success
Jun 17 00:52:56.129: INFO: Pod "test-pod-07f6b687-d562-495b-9875-96950a907e2e" satisfied condition "Succeeded or Failed"
Jun 17 00:52:56.282: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod test-pod-07f6b687-d562-495b-9875-96950a907e2e container agnhost-container: <nil>
STEP: delete the pod
Jun 17 00:52:56.577: INFO: Waiting for pod test-pod-07f6b687-d562-495b-9875-96950a907e2e to disappear
Jun 17 00:52:56.721: INFO: Pod test-pod-07f6b687-d562-495b-9875-96950a907e2e no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.321 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7349
STEP: Creating statefulset with conflicting port in namespace statefulset-7349
STEP: Waiting until pod test-pod will start running in namespace statefulset-7349
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7349
Jun 17 00:52:32.439: INFO: Observed stateful pod in namespace: statefulset-7349, name: ss-0, uid: 078da2b8-b030-4bf0-8f99-f8fa4a48c8e8, status phase: Pending. Waiting for statefulset controller to delete.
Jun 17 00:52:32.770: INFO: Observed stateful pod in namespace: statefulset-7349, name: ss-0, uid: 078da2b8-b030-4bf0-8f99-f8fa4a48c8e8, status phase: Failed. Waiting for statefulset controller to delete.
Jun 17 00:52:32.775: INFO: Observed stateful pod in namespace: statefulset-7349, name: ss-0, uid: 078da2b8-b030-4bf0-8f99-f8fa4a48c8e8, status phase: Failed. Waiting for statefulset controller to delete.
Jun 17 00:52:32.778: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7349
STEP: Removing pod with conflicting port in namespace statefulset-7349
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7349 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Jun 17 00:52:39.519: INFO: Deleting all statefulset in ns statefulset-7349
... skipping 39 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:53:01.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:53:04.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8255" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":5,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:04.911: INFO: Driver "nfs" does not support topology - skipping
... skipping 23 lines ...
Jun 17 00:52:30.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 17 00:52:31.477: INFO: Waiting up to 5m0s for pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402" in namespace "security-context-8119" to be "Succeeded or Failed"
Jun 17 00:52:31.624: INFO: Pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402": Phase="Pending", Reason="", readiness=false. Elapsed: 146.817609ms
Jun 17 00:52:33.781: INFO: Pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303359577s
Jun 17 00:52:35.929: INFO: Pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451532894s
Jun 17 00:52:38.074: INFO: Pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596797863s
Jun 17 00:52:40.220: INFO: Pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402": Phase="Pending", Reason="", readiness=false. Elapsed: 8.742885602s
Jun 17 00:52:42.365: INFO: Pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402": Phase="Pending", Reason="", readiness=false. Elapsed: 10.887699669s
... skipping 6 lines ...
Jun 17 00:52:57.385: INFO: Pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402": Phase="Pending", Reason="", readiness=false. Elapsed: 25.907734199s
Jun 17 00:52:59.530: INFO: Pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402": Phase="Pending", Reason="", readiness=false. Elapsed: 28.0531803s
Jun 17 00:53:01.676: INFO: Pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402": Phase="Pending", Reason="", readiness=false. Elapsed: 30.198540079s
Jun 17 00:53:03.821: INFO: Pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402": Phase="Pending", Reason="", readiness=false. Elapsed: 32.344113627s
Jun 17 00:53:05.966: INFO: Pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.489040922s
STEP: Saw pod success
Jun 17 00:53:05.966: INFO: Pod "security-context-6359b8cf-649c-4e79-94b8-0392c4f35402" satisfied condition "Succeeded or Failed"
Jun 17 00:53:06.111: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod security-context-6359b8cf-649c-4e79-94b8-0392c4f35402 container test-container: <nil>
STEP: delete the pod
Jun 17 00:53:06.413: INFO: Waiting for pod security-context-6359b8cf-649c-4e79-94b8-0392c4f35402 to disappear
Jun 17 00:53:06.565: INFO: Pod security-context-6359b8cf-649c-4e79-94b8-0392c4f35402 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 32 lines ...
• [SLOW TEST:71.119 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:10.151: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:53:06.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 83 lines ...
Jun 17 00:53:00.614: INFO: Pod aws-client still exists
Jun 17 00:53:02.614: INFO: Waiting for pod aws-client to disappear
Jun 17 00:53:02.759: INFO: Pod aws-client still exists
Jun 17 00:53:04.614: INFO: Waiting for pod aws-client to disappear
Jun 17 00:53:04.759: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Jun 17 00:53:05.057: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0f89d60cbe9ca2b50", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f89d60cbe9ca2b50 is currently attached to i-020119d91fc43a3d7
	status code: 400, request id: 391a2409-011c-4ad5-8600-f9050e6e1b2b
Jun 17 00:53:10.961: INFO: Successfully deleted PD "aws://sa-east-1a/vol-0f89d60cbe9ca2b50".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:53:10.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5887" for this suite.
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:15.545: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 86 lines ...
Jun 17 00:52:51.319: INFO: PersistentVolumeClaim pvc-trtkj found but phase is Pending instead of Bound.
Jun 17 00:52:53.463: INFO: PersistentVolumeClaim pvc-trtkj found and phase=Bound (4.433001232s)
Jun 17 00:52:53.463: INFO: Waiting up to 3m0s for PersistentVolume local-5xhrq to have phase Bound
Jun 17 00:52:53.607: INFO: PersistentVolume local-5xhrq found and phase=Bound (143.893158ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-5dgt
STEP: Creating a pod to test exec-volume-test
Jun 17 00:52:54.050: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-5dgt" in namespace "volume-7267" to be "Succeeded or Failed"
Jun 17 00:52:54.194: INFO: Pod "exec-volume-test-preprovisionedpv-5dgt": Phase="Pending", Reason="", readiness=false. Elapsed: 144.086477ms
Jun 17 00:52:56.339: INFO: Pod "exec-volume-test-preprovisionedpv-5dgt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289045124s
Jun 17 00:52:58.486: INFO: Pod "exec-volume-test-preprovisionedpv-5dgt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436506064s
Jun 17 00:53:00.632: INFO: Pod "exec-volume-test-preprovisionedpv-5dgt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.582557977s
Jun 17 00:53:02.778: INFO: Pod "exec-volume-test-preprovisionedpv-5dgt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.727866907s
Jun 17 00:53:04.922: INFO: Pod "exec-volume-test-preprovisionedpv-5dgt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.872568123s
Jun 17 00:53:07.074: INFO: Pod "exec-volume-test-preprovisionedpv-5dgt": Phase="Pending", Reason="", readiness=false. Elapsed: 13.024286698s
Jun 17 00:53:09.223: INFO: Pod "exec-volume-test-preprovisionedpv-5dgt": Phase="Pending", Reason="", readiness=false. Elapsed: 15.173600497s
Jun 17 00:53:11.379: INFO: Pod "exec-volume-test-preprovisionedpv-5dgt": Phase="Pending", Reason="", readiness=false. Elapsed: 17.329662561s
Jun 17 00:53:13.526: INFO: Pod "exec-volume-test-preprovisionedpv-5dgt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.476001331s
STEP: Saw pod success
Jun 17 00:53:13.526: INFO: Pod "exec-volume-test-preprovisionedpv-5dgt" satisfied condition "Succeeded or Failed"
Jun 17 00:53:13.670: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-5dgt container exec-container-preprovisionedpv-5dgt: <nil>
STEP: delete the pod
Jun 17 00:53:13.966: INFO: Waiting for pod exec-volume-test-preprovisionedpv-5dgt to disappear
Jun 17 00:53:14.110: INFO: Pod exec-volume-test-preprovisionedpv-5dgt no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-5dgt
Jun 17 00:53:14.111: INFO: Deleting pod "exec-volume-test-preprovisionedpv-5dgt" in namespace "volume-7267"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":28,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:53:10.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 41 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl copy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1345
    should copy a file from a running Pod
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":3,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:18.565: INFO: Only supported for providers [gce gke] (not aws)
... skipping 162 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:19.770: INFO: Only supported for providers [azure] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 17 00:53:16.843: INFO: Waiting up to 5m0s for pod "pod-ef3e96b4-4cf9-4fff-896b-57b330f0d3d0" in namespace "emptydir-8025" to be "Succeeded or Failed"
Jun 17 00:53:16.987: INFO: Pod "pod-ef3e96b4-4cf9-4fff-896b-57b330f0d3d0": Phase="Pending", Reason="", readiness=false. Elapsed: 144.119112ms
Jun 17 00:53:19.132: INFO: Pod "pod-ef3e96b4-4cf9-4fff-896b-57b330f0d3d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288834559s
STEP: Saw pod success
Jun 17 00:53:19.132: INFO: Pod "pod-ef3e96b4-4cf9-4fff-896b-57b330f0d3d0" satisfied condition "Succeeded or Failed"
Jun 17 00:53:19.277: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-ef3e96b4-4cf9-4fff-896b-57b330f0d3d0 container test-container: <nil>
STEP: delete the pod
Jun 17 00:53:19.574: INFO: Waiting for pod pod-ef3e96b4-4cf9-4fff-896b-57b330f0d3d0 to disappear
Jun 17 00:53:19.720: INFO: Pod pod-ef3e96b4-4cf9-4fff-896b-57b330f0d3d0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:53:19.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8025" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":3,"skipped":34,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 125 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Jun 17 00:53:09.501: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6759 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Jun 17 00:53:11.380: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Jun 17 00:53:11.381: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6759 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Jun 17 00:53:13.087: INFO: rc: 255
Jun 17 00:53:13.088: INFO: got err error running /tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6759 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0617 00:53:12.870584     200 merged_client_builder.go:163] Using in-cluster namespace
I0617 00:53:12.870798     200 merged_client_builder.go:121] Using in-cluster configuration
I0617 00:53:12.873466     200 merged_client_builder.go:121] Using in-cluster configuration
I0617 00:53:12.881342     200 merged_client_builder.go:121] Using in-cluster configuration
I0617 00:53:12.881766     200 round_trippers.go:432] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-6759/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0617 00:53:12.886666     200 helpers.go:115] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000130001, 0xc0009da000, 0x68, 0x1af)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3054420, 0xc000000003, 0x0, 0x0, 0xc0005b4a80, 0x25f1c90, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x3054420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0009fc090, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc00056d2c0, 0x3a, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x207cd20, 0xc00000cdc8, 0x1f06e70)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:177 +0x8a3
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc00035eb00, 0xc000177b60, 0x1, 0x3)
... skipping 66 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Jun 17 00:53:13.088: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6759 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Jun 17 00:53:14.755: INFO: rc: 255
Jun 17 00:53:14.755: INFO: got err error running /tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6759 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0617 00:53:14.592202     212 merged_client_builder.go:163] Using in-cluster namespace
I0617 00:53:14.604117     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 11 milliseconds
I0617 00:53:14.604209     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0617 00:53:14.606488     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 1 milliseconds
I0617 00:53:14.606722     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0617 00:53:14.606751     212 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0617 00:53:14.616208     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 9 milliseconds
I0617 00:53:14.616398     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0617 00:53:14.619231     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 2 milliseconds
I0617 00:53:14.619286     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0617 00:53:14.622362     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 2 milliseconds
I0617 00:53:14.622431     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0617 00:53:14.622643     212 helpers.go:234] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.64.0.10:53: no such host
F0617 00:53:14.622674     212 helpers.go:115] Unable to connect to the server: dial tcp: lookup invalid on 100.64.0.10:53: no such host
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc00042c380, 0x88, 0x1b8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3054420, 0xc000000003, 0x0, 0x0, 0xc000264a10, 0x25f1c90, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x3054420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0004db030, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc000101d40, 0x59, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x207c080, 0xc000890510, 0x1f06e70)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0004ecb00, 0xc000422f60, 0x1, 0x3)
... skipping 30 lines ...
	/usr/local/go/src/net/http/client.go:396 +0x337

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Jun 17 00:53:14.755: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6759 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Jun 17 00:53:16.307: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Jun 17 00:53:16.307: INFO: stdout: "I0617 00:53:16.213398     223 merged_client_builder.go:121] Using in-cluster configuration\nI0617 00:53:16.216003     223 merged_client_builder.go:121] Using in-cluster configuration\nI0617 00:53:16.223817     223 merged_client_builder.go:121] Using in-cluster configuration\nI0617 00:53:16.229723     223 round_trippers.go:454] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 5 milliseconds\nNo resources found in invalid namespace.\n"
Jun 17 00:53:16.307: INFO: stdout: I0617 00:53:16.213398     223 merged_client_builder.go:121] Using in-cluster configuration
... skipping 75 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should handle in-cluster config
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:636
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":1,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
Jun 17 00:52:53.606: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:52:53.753: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:52:54.188: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:52:54.332: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:52:54.477: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:52:54.622: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:52:54.920: INFO: Lookups using dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6590.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6590.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local jessie_udp@dns-test-service-2.dns-6590.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6590.svc.cluster.local]

Jun 17 00:53:00.065: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:00.209: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:00.354: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:00.498: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:00.931: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:01.075: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:01.219: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:01.363: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:01.652: INFO: Lookups using dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6590.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6590.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local jessie_udp@dns-test-service-2.dns-6590.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6590.svc.cluster.local]

Jun 17 00:53:05.070: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:05.214: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:05.359: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:05.503: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:05.935: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:06.079: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:06.224: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:06.372: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:06.660: INFO: Lookups using dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6590.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6590.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local jessie_udp@dns-test-service-2.dns-6590.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6590.svc.cluster.local]

Jun 17 00:53:10.067: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:10.212: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:10.356: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:10.501: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:10.933: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:11.077: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:11.243: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:11.392: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:11.747: INFO: Lookups using dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6590.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6590.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local jessie_udp@dns-test-service-2.dns-6590.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6590.svc.cluster.local]

Jun 17 00:53:15.068: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:15.215: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:15.359: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:15.502: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6590.svc.cluster.local from pod dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b: the server could not find the requested resource (get pods dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b)
Jun 17 00:53:16.660: INFO: Lookups using dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6590.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6590.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6590.svc.cluster.local]

Jun 17 00:53:21.658: INFO: DNS probes using dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 19 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Jun 17 00:53:19.415: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 17 00:53:19.560: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-th5z
STEP: Creating a pod to test subpath
Jun 17 00:53:19.706: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-th5z" in namespace "provisioning-4612" to be "Succeeded or Failed"
Jun 17 00:53:19.876: INFO: Pod "pod-subpath-test-inlinevolume-th5z": Phase="Pending", Reason="", readiness=false. Elapsed: 169.612154ms
Jun 17 00:53:22.021: INFO: Pod "pod-subpath-test-inlinevolume-th5z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.314651144s
STEP: Saw pod success
Jun 17 00:53:22.021: INFO: Pod "pod-subpath-test-inlinevolume-th5z" satisfied condition "Succeeded or Failed"
Jun 17 00:53:22.166: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-th5z container test-container-volume-inlinevolume-th5z: <nil>
STEP: delete the pod
Jun 17 00:53:22.489: INFO: Waiting for pod pod-subpath-test-inlinevolume-th5z to disappear
Jun 17 00:53:22.634: INFO: Pod pod-subpath-test-inlinevolume-th5z no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-th5z
Jun 17 00:53:22.634: INFO: Deleting pod "pod-subpath-test-inlinevolume-th5z" in namespace "provisioning-4612"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:53:22.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-4612" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:23.226: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 110 lines ...
Jun 17 00:53:20.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 17 00:53:23.793: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:53:24.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4920" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:24.396: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:53:22.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Jun 17 00:53:23.282: INFO: Waiting up to 5m0s for pod "busybox-user-0-37dab240-18cd-49bc-ac46-bddcd1ea08b2" in namespace "security-context-test-9481" to be "Succeeded or Failed"
Jun 17 00:53:23.426: INFO: Pod "busybox-user-0-37dab240-18cd-49bc-ac46-bddcd1ea08b2": Phase="Pending", Reason="", readiness=false. Elapsed: 143.536265ms
Jun 17 00:53:25.570: INFO: Pod "busybox-user-0-37dab240-18cd-49bc-ac46-bddcd1ea08b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287477695s
Jun 17 00:53:25.570: INFO: Pod "busybox-user-0-37dab240-18cd-49bc-ac46-bddcd1ea08b2" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:53:25.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9481" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:25.935: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 36 lines ...
• [SLOW TEST:61.298 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:53:27.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Jun 17 00:53:28.430: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-9ce390ec-c4a5-40fb-993d-c18ac63c499c" in namespace "security-context-test-7370" to be "Succeeded or Failed"
Jun 17 00:53:28.573: INFO: Pod "busybox-privileged-true-9ce390ec-c4a5-40fb-993d-c18ac63c499c": Phase="Pending", Reason="", readiness=false. Elapsed: 143.117251ms
Jun 17 00:53:30.717: INFO: Pod "busybox-privileged-true-9ce390ec-c4a5-40fb-993d-c18ac63c499c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287023804s
Jun 17 00:53:30.717: INFO: Pod "busybox-privileged-true-9ce390ec-c4a5-40fb-993d-c18ac63c499c" satisfied condition "Succeeded or Failed"
Jun 17 00:53:30.866: INFO: Got logs for pod "busybox-privileged-true-9ce390ec-c4a5-40fb-993d-c18ac63c499c": ""
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:53:30.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7370" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":3,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:31.171: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":2,"skipped":12,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:53:11.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
Jun 17 00:53:17.135: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:53:32.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8462" for this suite.
STEP: Destroying namespace "webhook-8462-markers" for this suite.
... skipping 4 lines ...
• [SLOW TEST:21.699 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:33.123: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
• [SLOW TEST:19.077 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":4,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":5,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:36.287: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 158 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jun 17 00:53:31.901: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 17 00:53:31.901: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9rcj
STEP: Creating a pod to test subpath
Jun 17 00:53:32.047: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9rcj" in namespace "provisioning-1237" to be "Succeeded or Failed"
Jun 17 00:53:32.190: INFO: Pod "pod-subpath-test-inlinevolume-9rcj": Phase="Pending", Reason="", readiness=false. Elapsed: 143.053082ms
Jun 17 00:53:34.335: INFO: Pod "pod-subpath-test-inlinevolume-9rcj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287902004s
Jun 17 00:53:36.479: INFO: Pod "pod-subpath-test-inlinevolume-9rcj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.431618687s
STEP: Saw pod success
Jun 17 00:53:36.479: INFO: Pod "pod-subpath-test-inlinevolume-9rcj" satisfied condition "Succeeded or Failed"
Jun 17 00:53:36.622: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-9rcj container test-container-volume-inlinevolume-9rcj: <nil>
STEP: delete the pod
Jun 17 00:53:36.917: INFO: Waiting for pod pod-subpath-test-inlinevolume-9rcj to disappear
Jun 17 00:53:37.060: INFO: Pod pod-subpath-test-inlinevolume-9rcj no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9rcj
Jun 17 00:53:37.060: INFO: Deleting pod "pod-subpath-test-inlinevolume-9rcj" in namespace "provisioning-1237"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:37.651: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 76 lines ...
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7297
STEP: Deleting pod verify-service-up-exec-pod-n4bvz in namespace services-7297
STEP: verifying service-disabled is not up
Jun 17 00:53:09.853: INFO: Creating new host exec pod
Jun 17 00:53:10.141: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 00:53:12.285: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 17 00:53:12.286: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7297 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.184.41:80 && echo service-down-failed'
Jun 17 00:53:15.773: INFO: rc: 28
Jun 17 00:53:15.773: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.65.184.41:80 && echo service-down-failed" in pod services-7297/verify-service-down-host-exec-pod: error running /tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7297 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.184.41:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.65.184.41:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7297
STEP: adding service-proxy-name label
STEP: verifying service is not up
Jun 17 00:53:16.211: INFO: Creating new host exec pod
Jun 17 00:53:16.501: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 00:53:18.645: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 17 00:53:18.645: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7297 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.190.1:80 && echo service-down-failed'
Jun 17 00:53:22.242: INFO: rc: 28
Jun 17 00:53:22.242: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.65.190.1:80 && echo service-down-failed" in pod services-7297/verify-service-down-host-exec-pod: error running /tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7297 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.190.1:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.65.190.1:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7297
STEP: removing service-proxy-name annotation
STEP: verifying service is up
Jun 17 00:53:22.677: INFO: Creating new host exec pod
... skipping 12 lines ...
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7297
STEP: Deleting pod verify-service-up-exec-pod-g5n5s in namespace services-7297
STEP: verifying service-disabled is still not up
Jun 17 00:53:31.392: INFO: Creating new host exec pod
Jun 17 00:53:31.679: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 00:53:33.823: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 17 00:53:33.823: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7297 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.184.41:80 && echo service-down-failed'
Jun 17 00:53:37.327: INFO: rc: 28
Jun 17 00:53:37.327: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.65.184.41:80 && echo service-down-failed" in pod services-7297/verify-service-down-host-exec-pod: error running /tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7297 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.184.41:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.65.184.41:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7297
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:53:37.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 19 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":5,"skipped":51,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:37.796: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 28 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-0fe731df-619e-4285-92ee-f9c245728e69
STEP: Creating secret with name secret-projected-all-test-volume-fdcb687b-f9d1-4eb1-82e0-b4d78c97a374
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 17 00:53:37.562: INFO: Waiting up to 5m0s for pod "projected-volume-1ddc9bc5-ee1a-46e5-a0ab-510abbf8e564" in namespace "projected-969" to be "Succeeded or Failed"
Jun 17 00:53:37.706: INFO: Pod "projected-volume-1ddc9bc5-ee1a-46e5-a0ab-510abbf8e564": Phase="Pending", Reason="", readiness=false. Elapsed: 144.441932ms
Jun 17 00:53:39.851: INFO: Pod "projected-volume-1ddc9bc5-ee1a-46e5-a0ab-510abbf8e564": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.2894231s
STEP: Saw pod success
Jun 17 00:53:39.851: INFO: Pod "projected-volume-1ddc9bc5-ee1a-46e5-a0ab-510abbf8e564" satisfied condition "Succeeded or Failed"
Jun 17 00:53:39.995: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod projected-volume-1ddc9bc5-ee1a-46e5-a0ab-510abbf8e564 container projected-all-volume-test: <nil>
STEP: delete the pod
Jun 17 00:53:40.290: INFO: Waiting for pod projected-volume-1ddc9bc5-ee1a-46e5-a0ab-510abbf8e564 to disappear
Jun 17 00:53:40.434: INFO: Pod projected-volume-1ddc9bc5-ee1a-46e5-a0ab-510abbf8e564 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:53:40.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-969" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":79,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 61 lines ...
Jun 17 00:53:23.048: INFO: Waiting for pod aws-client to disappear
Jun 17 00:53:23.192: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Jun 17 00:53:23.193: INFO: Deleting PersistentVolumeClaim "pvc-8m58m"
Jun 17 00:53:23.338: INFO: Deleting PersistentVolume "aws-zbj6p"
Jun 17 00:53:23.799: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0eacdf0be4351d319", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0eacdf0be4351d319 is currently attached to i-08a1e274e6efce9f5
	status code: 400, request id: 18040d06-aae2-4f6c-adb9-417b2a65f3be
Jun 17 00:53:29.552: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0eacdf0be4351d319", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0eacdf0be4351d319 is currently attached to i-08a1e274e6efce9f5
	status code: 400, request id: 8de0970d-f33f-446d-99f3-6e5cadae7732
Jun 17 00:53:35.404: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0eacdf0be4351d319", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0eacdf0be4351d319 is currently attached to i-08a1e274e6efce9f5
	status code: 400, request id: 23c5c92e-6002-4825-ba9c-f7e9cfbc2054
Jun 17 00:53:41.249: INFO: Successfully deleted PD "aws://sa-east-1a/vol-0eacdf0be4351d319".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:53:41.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1436" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":25,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:41.737: INFO: Only supported for providers [gce gke] (not aws)
... skipping 74 lines ...
• [SLOW TEST:22.973 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:42.819: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 115 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":48,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
• [SLOW TEST:6.065 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:53:45.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 17 00:53:43.747: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f426c8aa-ce8f-466f-b247-d7d7d26d94f3" in namespace "projected-6495" to be "Succeeded or Failed"
Jun 17 00:53:43.891: INFO: Pod "downwardapi-volume-f426c8aa-ce8f-466f-b247-d7d7d26d94f3": Phase="Pending", Reason="", readiness=false. Elapsed: 144.100685ms
Jun 17 00:53:46.037: INFO: Pod "downwardapi-volume-f426c8aa-ce8f-466f-b247-d7d7d26d94f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289713063s
Jun 17 00:53:48.181: INFO: Pod "downwardapi-volume-f426c8aa-ce8f-466f-b247-d7d7d26d94f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433862841s
STEP: Saw pod success
Jun 17 00:53:48.181: INFO: Pod "downwardapi-volume-f426c8aa-ce8f-466f-b247-d7d7d26d94f3" satisfied condition "Succeeded or Failed"
Jun 17 00:53:48.324: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod downwardapi-volume-f426c8aa-ce8f-466f-b247-d7d7d26d94f3 container client-container: <nil>
STEP: delete the pod
Jun 17 00:53:48.645: INFO: Waiting for pod downwardapi-volume-f426c8aa-ce8f-466f-b247-d7d7d26d94f3 to disappear
Jun 17 00:53:48.788: INFO: Pod downwardapi-volume-f426c8aa-ce8f-466f-b247-d7d7d26d94f3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.198 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:49.087: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 53 lines ...
• [SLOW TEST:11.585 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:56.622: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 63 lines ...
• [SLOW TEST:53.221 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":6,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:53:58.177: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 41 lines ...
• [SLOW TEST:41.750 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:194
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":4,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 22 lines ...
Jun 17 00:53:52.121: INFO: PersistentVolumeClaim pvc-9swb2 found but phase is Pending instead of Bound.
Jun 17 00:53:54.266: INFO: PersistentVolumeClaim pvc-9swb2 found and phase=Bound (10.872911606s)
Jun 17 00:53:54.266: INFO: Waiting up to 3m0s for PersistentVolume local-b4vlb to have phase Bound
Jun 17 00:53:54.409: INFO: PersistentVolume local-b4vlb found and phase=Bound (143.066437ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-92r2
STEP: Creating a pod to test subpath
Jun 17 00:53:54.844: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-92r2" in namespace "provisioning-8101" to be "Succeeded or Failed"
Jun 17 00:53:54.994: INFO: Pod "pod-subpath-test-preprovisionedpv-92r2": Phase="Pending", Reason="", readiness=false. Elapsed: 149.63216ms
Jun 17 00:53:57.138: INFO: Pod "pod-subpath-test-preprovisionedpv-92r2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293591564s
Jun 17 00:53:59.293: INFO: Pod "pod-subpath-test-preprovisionedpv-92r2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.448920094s
STEP: Saw pod success
Jun 17 00:53:59.293: INFO: Pod "pod-subpath-test-preprovisionedpv-92r2" satisfied condition "Succeeded or Failed"
Jun 17 00:53:59.437: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-92r2 container test-container-volume-preprovisionedpv-92r2: <nil>
STEP: delete the pod
Jun 17 00:53:59.735: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-92r2 to disappear
Jun 17 00:53:59.878: INFO: Pod pod-subpath-test-preprovisionedpv-92r2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-92r2
Jun 17 00:53:59.878: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-92r2" in namespace "provisioning-8101"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Jun 17 00:53:52.311: INFO: PersistentVolumeClaim pvc-fb6wd found but phase is Pending instead of Bound.
Jun 17 00:53:54.457: INFO: PersistentVolumeClaim pvc-fb6wd found and phase=Bound (15.159781621s)
Jun 17 00:53:54.457: INFO: Waiting up to 3m0s for PersistentVolume local-k8f5d to have phase Bound
Jun 17 00:53:54.601: INFO: PersistentVolume local-k8f5d found and phase=Bound (144.009072ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hbpb
STEP: Creating a pod to test subpath
Jun 17 00:53:55.043: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hbpb" in namespace "provisioning-8612" to be "Succeeded or Failed"
Jun 17 00:53:55.187: INFO: Pod "pod-subpath-test-preprovisionedpv-hbpb": Phase="Pending", Reason="", readiness=false. Elapsed: 144.061842ms
Jun 17 00:53:57.331: INFO: Pod "pod-subpath-test-preprovisionedpv-hbpb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288304488s
Jun 17 00:53:59.477: INFO: Pod "pod-subpath-test-preprovisionedpv-hbpb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434543563s
Jun 17 00:54:01.624: INFO: Pod "pod-subpath-test-preprovisionedpv-hbpb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.581550302s
STEP: Saw pod success
Jun 17 00:54:01.625: INFO: Pod "pod-subpath-test-preprovisionedpv-hbpb" satisfied condition "Succeeded or Failed"
Jun 17 00:54:01.771: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-hbpb container test-container-subpath-preprovisionedpv-hbpb: <nil>
STEP: delete the pod
Jun 17 00:54:02.074: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hbpb to disappear
Jun 17 00:54:02.219: INFO: Pod pod-subpath-test-preprovisionedpv-hbpb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hbpb
Jun 17 00:54:02.219: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hbpb" in namespace "provisioning-8612"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:04.240: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-94f3bb9b-d6f6-4cd4-bfa5-a9e591d1e1bf
STEP: Creating a pod to test consume secrets
Jun 17 00:54:05.892: INFO: Waiting up to 5m0s for pod "pod-secrets-7ee08aa0-a718-4faa-84d0-1ddbe5533f34" in namespace "secrets-2217" to be "Succeeded or Failed"
Jun 17 00:54:06.037: INFO: Pod "pod-secrets-7ee08aa0-a718-4faa-84d0-1ddbe5533f34": Phase="Pending", Reason="", readiness=false. Elapsed: 144.528518ms
Jun 17 00:54:08.183: INFO: Pod "pod-secrets-7ee08aa0-a718-4faa-84d0-1ddbe5533f34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29077228s
Jun 17 00:54:10.327: INFO: Pod "pod-secrets-7ee08aa0-a718-4faa-84d0-1ddbe5533f34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.435147307s
STEP: Saw pod success
Jun 17 00:54:10.328: INFO: Pod "pod-secrets-7ee08aa0-a718-4faa-84d0-1ddbe5533f34" satisfied condition "Succeeded or Failed"
Jun 17 00:54:10.472: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-secrets-7ee08aa0-a718-4faa-84d0-1ddbe5533f34 container secret-volume-test: <nil>
STEP: delete the pod
Jun 17 00:54:10.776: INFO: Waiting for pod pod-secrets-7ee08aa0-a718-4faa-84d0-1ddbe5533f34 to disappear
Jun 17 00:54:10.920: INFO: Pod pod-secrets-7ee08aa0-a718-4faa-84d0-1ddbe5533f34 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 5 lines ...
• [SLOW TEST:7.055 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:11.376: INFO: Driver nfs doesn't support ext3 -- skipping
... skipping 100 lines ...
Jun 17 00:52:58.960: INFO: PersistentVolumeClaim csi-hostpath5q7wh found but phase is Pending instead of Bound.
Jun 17 00:53:01.105: INFO: PersistentVolumeClaim csi-hostpath5q7wh found but phase is Pending instead of Bound.
Jun 17 00:53:03.250: INFO: PersistentVolumeClaim csi-hostpath5q7wh found but phase is Pending instead of Bound.
Jun 17 00:53:05.395: INFO: PersistentVolumeClaim csi-hostpath5q7wh found and phase=Bound (58.115815183s)
STEP: Creating pod pod-subpath-test-dynamicpv-74lj
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 00:53:05.829: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-74lj" in namespace "provisioning-3992" to be "Succeeded or Failed"
Jun 17 00:53:05.974: INFO: Pod "pod-subpath-test-dynamicpv-74lj": Phase="Pending", Reason="", readiness=false. Elapsed: 144.229063ms
Jun 17 00:53:08.118: INFO: Pod "pod-subpath-test-dynamicpv-74lj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288894137s
Jun 17 00:53:10.264: INFO: Pod "pod-subpath-test-dynamicpv-74lj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434772646s
Jun 17 00:53:12.409: INFO: Pod "pod-subpath-test-dynamicpv-74lj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580064285s
Jun 17 00:53:14.554: INFO: Pod "pod-subpath-test-dynamicpv-74lj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.725089847s
Jun 17 00:53:16.699: INFO: Pod "pod-subpath-test-dynamicpv-74lj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.869688045s
... skipping 9 lines ...
Jun 17 00:53:38.164: INFO: Pod "pod-subpath-test-dynamicpv-74lj": Phase="Running", Reason="", readiness=true. Elapsed: 32.334371796s
Jun 17 00:53:40.311: INFO: Pod "pod-subpath-test-dynamicpv-74lj": Phase="Running", Reason="", readiness=true. Elapsed: 34.481343701s
Jun 17 00:53:42.456: INFO: Pod "pod-subpath-test-dynamicpv-74lj": Phase="Running", Reason="", readiness=true. Elapsed: 36.626295475s
Jun 17 00:53:44.601: INFO: Pod "pod-subpath-test-dynamicpv-74lj": Phase="Running", Reason="", readiness=true. Elapsed: 38.771791557s
Jun 17 00:53:46.747: INFO: Pod "pod-subpath-test-dynamicpv-74lj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.917808652s
STEP: Saw pod success
Jun 17 00:53:46.747: INFO: Pod "pod-subpath-test-dynamicpv-74lj" satisfied condition "Succeeded or Failed"
Jun 17 00:53:46.894: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-74lj container test-container-subpath-dynamicpv-74lj: <nil>
STEP: delete the pod
Jun 17 00:53:47.195: INFO: Waiting for pod pod-subpath-test-dynamicpv-74lj to disappear
Jun 17 00:53:47.339: INFO: Pod pod-subpath-test-dynamicpv-74lj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-74lj
Jun 17 00:53:47.339: INFO: Deleting pod "pod-subpath-test-dynamicpv-74lj" in namespace "provisioning-3992"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:11.546: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 214 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":1,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:54:15.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-3911" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 15 lines ...
Jun 17 00:54:06.582: INFO: PersistentVolumeClaim pvc-mhv6g found but phase is Pending instead of Bound.
Jun 17 00:54:08.727: INFO: PersistentVolumeClaim pvc-mhv6g found and phase=Bound (2.289517642s)
Jun 17 00:54:08.727: INFO: Waiting up to 3m0s for PersistentVolume local-p4sbc to have phase Bound
Jun 17 00:54:08.871: INFO: PersistentVolume local-p4sbc found and phase=Bound (144.017835ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ks4m
STEP: Creating a pod to test subpath
Jun 17 00:54:09.306: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ks4m" in namespace "provisioning-3165" to be "Succeeded or Failed"
Jun 17 00:54:09.451: INFO: Pod "pod-subpath-test-preprovisionedpv-ks4m": Phase="Pending", Reason="", readiness=false. Elapsed: 144.198784ms
Jun 17 00:54:11.596: INFO: Pod "pod-subpath-test-preprovisionedpv-ks4m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289739848s
Jun 17 00:54:13.742: INFO: Pod "pod-subpath-test-preprovisionedpv-ks4m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.435576476s
STEP: Saw pod success
Jun 17 00:54:13.742: INFO: Pod "pod-subpath-test-preprovisionedpv-ks4m" satisfied condition "Succeeded or Failed"
Jun 17 00:54:13.886: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-ks4m container test-container-volume-preprovisionedpv-ks4m: <nil>
STEP: delete the pod
Jun 17 00:54:14.190: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ks4m to disappear
Jun 17 00:54:14.335: INFO: Pod pod-subpath-test-preprovisionedpv-ks4m no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ks4m
Jun 17 00:54:14.335: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ks4m" in namespace "provisioning-3165"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":43,"failed":0}

SSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:53:47.825: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
Jun 17 00:54:05.509: INFO: PersistentVolumeClaim pvc-bgpwk found but phase is Pending instead of Bound.
Jun 17 00:54:07.653: INFO: PersistentVolumeClaim pvc-bgpwk found and phase=Bound (15.161138205s)
Jun 17 00:54:07.653: INFO: Waiting up to 3m0s for PersistentVolume local-mwmlv to have phase Bound
Jun 17 00:54:07.797: INFO: PersistentVolume local-mwmlv found and phase=Bound (143.738306ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-4xx8
STEP: Creating a pod to test exec-volume-test
Jun 17 00:54:08.230: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-4xx8" in namespace "volume-4635" to be "Succeeded or Failed"
Jun 17 00:54:08.374: INFO: Pod "exec-volume-test-preprovisionedpv-4xx8": Phase="Pending", Reason="", readiness=false. Elapsed: 143.66612ms
Jun 17 00:54:10.519: INFO: Pod "exec-volume-test-preprovisionedpv-4xx8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288562346s
Jun 17 00:54:12.663: INFO: Pod "exec-volume-test-preprovisionedpv-4xx8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432726644s
STEP: Saw pod success
Jun 17 00:54:12.663: INFO: Pod "exec-volume-test-preprovisionedpv-4xx8" satisfied condition "Succeeded or Failed"
Jun 17 00:54:12.807: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-4xx8 container exec-container-preprovisionedpv-4xx8: <nil>
STEP: delete the pod
Jun 17 00:54:13.101: INFO: Waiting for pod exec-volume-test-preprovisionedpv-4xx8 to disappear
Jun 17 00:54:13.248: INFO: Pod exec-volume-test-preprovisionedpv-4xx8 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-4xx8
Jun 17 00:54:13.248: INFO: Deleting pod "exec-volume-test-preprovisionedpv-4xx8" in namespace "volume-4635"
... skipping 156 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":2,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 113 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":6,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:28.899: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver "csi-hostpath" does not support FsGroup - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":34,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:54:17.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-a81472b9-b0c0-4f32-ae00-f8c693862fa9
STEP: Creating a pod to test consume configMaps
Jun 17 00:54:18.483: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917" in namespace "configmap-3373" to be "Succeeded or Failed"
Jun 17 00:54:18.628: INFO: Pod "pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917": Phase="Pending", Reason="", readiness=false. Elapsed: 144.960044ms
Jun 17 00:54:20.772: INFO: Pod "pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289565401s
Jun 17 00:54:22.918: INFO: Pod "pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435371683s
Jun 17 00:54:25.062: INFO: Pod "pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579304176s
Jun 17 00:54:27.207: INFO: Pod "pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723757401s
Jun 17 00:54:29.352: INFO: Pod "pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917": Phase="Pending", Reason="", readiness=false. Elapsed: 10.868960889s
Jun 17 00:54:31.497: INFO: Pod "pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.014144271s
STEP: Saw pod success
Jun 17 00:54:31.497: INFO: Pod "pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917" satisfied condition "Succeeded or Failed"
Jun 17 00:54:31.641: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917 container agnhost-container: <nil>
STEP: delete the pod
Jun 17 00:54:31.938: INFO: Waiting for pod pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917 to disappear
Jun 17 00:54:32.082: INFO: Pod pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.974 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
Jun 17 00:54:30.180: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 17 00:54:30.180: INFO: stdout: "controller-manager scheduler etcd-0 etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of controller-manager
Jun 17 00:54:30.180: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9325 get componentstatuses controller-manager'
Jun 17 00:54:30.721: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 17 00:54:30.721: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of scheduler
Jun 17 00:54:30.722: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9325 get componentstatuses scheduler'
Jun 17 00:54:31.250: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 17 00:54:31.250: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of etcd-0
Jun 17 00:54:31.251: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9325 get componentstatuses etcd-0'
Jun 17 00:54:31.813: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 17 00:54:31.813: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of etcd-1
Jun 17 00:54:31.813: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9325 get componentstatuses etcd-1'
Jun 17 00:54:32.339: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jun 17 00:54:32.339: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:54:32.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9325" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":7,"skipped":56,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:32.662: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 14 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:53:01.293: INFO: >>> kubeConfig: /root/.kube/config
... skipping 127 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:33.465: INFO: Driver "nfs" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
    Requires at least 2 nodes (not 0)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":3,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:53:49.064: INFO: >>> kubeConfig: /root/.kube/config
... skipping 14 lines ...
Jun 17 00:53:59.497: INFO: PersistentVolumeClaim nfshrr2d found but phase is Pending instead of Bound.
Jun 17 00:54:01.641: INFO: PersistentVolumeClaim nfshrr2d found but phase is Pending instead of Bound.
Jun 17 00:54:03.785: INFO: PersistentVolumeClaim nfshrr2d found but phase is Pending instead of Bound.
Jun 17 00:54:05.930: INFO: PersistentVolumeClaim nfshrr2d found and phase=Bound (6.577071649s)
STEP: Creating pod pod-subpath-test-dynamicpv-2xld
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 00:54:06.362: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-2xld" in namespace "provisioning-71" to be "Succeeded or Failed"
Jun 17 00:54:06.506: INFO: Pod "pod-subpath-test-dynamicpv-2xld": Phase="Pending", Reason="", readiness=false. Elapsed: 143.564748ms
Jun 17 00:54:08.649: INFO: Pod "pod-subpath-test-dynamicpv-2xld": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287345444s
Jun 17 00:54:10.794: INFO: Pod "pod-subpath-test-dynamicpv-2xld": Phase="Running", Reason="", readiness=true. Elapsed: 4.432393948s
Jun 17 00:54:12.939: INFO: Pod "pod-subpath-test-dynamicpv-2xld": Phase="Running", Reason="", readiness=true. Elapsed: 6.57737918s
Jun 17 00:54:15.084: INFO: Pod "pod-subpath-test-dynamicpv-2xld": Phase="Running", Reason="", readiness=true. Elapsed: 8.721877575s
Jun 17 00:54:17.228: INFO: Pod "pod-subpath-test-dynamicpv-2xld": Phase="Running", Reason="", readiness=true. Elapsed: 10.865667546s
... skipping 2 lines ...
Jun 17 00:54:23.661: INFO: Pod "pod-subpath-test-dynamicpv-2xld": Phase="Running", Reason="", readiness=true. Elapsed: 17.299450548s
Jun 17 00:54:25.806: INFO: Pod "pod-subpath-test-dynamicpv-2xld": Phase="Running", Reason="", readiness=true. Elapsed: 19.444285518s
Jun 17 00:54:27.950: INFO: Pod "pod-subpath-test-dynamicpv-2xld": Phase="Running", Reason="", readiness=true. Elapsed: 21.588339169s
Jun 17 00:54:30.095: INFO: Pod "pod-subpath-test-dynamicpv-2xld": Phase="Running", Reason="", readiness=true. Elapsed: 23.732761246s
Jun 17 00:54:32.239: INFO: Pod "pod-subpath-test-dynamicpv-2xld": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.877027187s
STEP: Saw pod success
Jun 17 00:54:32.239: INFO: Pod "pod-subpath-test-dynamicpv-2xld" satisfied condition "Succeeded or Failed"
Jun 17 00:54:32.383: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-2xld container test-container-subpath-dynamicpv-2xld: <nil>
STEP: delete the pod
Jun 17 00:54:32.679: INFO: Waiting for pod pod-subpath-test-dynamicpv-2xld to disappear
Jun 17 00:54:32.823: INFO: Pod pod-subpath-test-dynamicpv-2xld no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-2xld
Jun 17 00:54:32.823: INFO: Deleting pod "pod-subpath-test-dynamicpv-2xld" in namespace "provisioning-71"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:36.436: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 109 lines ...
Jun 17 00:53:49.819: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-5679nl8rm
STEP: creating a claim
Jun 17 00:53:49.963: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-xk4n
STEP: Creating a pod to test subpath
Jun 17 00:53:50.396: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xk4n" in namespace "provisioning-5679" to be "Succeeded or Failed"
Jun 17 00:53:50.549: INFO: Pod "pod-subpath-test-dynamicpv-xk4n": Phase="Pending", Reason="", readiness=false. Elapsed: 152.665517ms
Jun 17 00:53:52.693: INFO: Pod "pod-subpath-test-dynamicpv-xk4n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296274557s
Jun 17 00:53:54.838: INFO: Pod "pod-subpath-test-dynamicpv-xk4n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441065129s
Jun 17 00:53:56.981: INFO: Pod "pod-subpath-test-dynamicpv-xk4n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584309042s
Jun 17 00:53:59.125: INFO: Pod "pod-subpath-test-dynamicpv-xk4n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.728328396s
Jun 17 00:54:01.269: INFO: Pod "pod-subpath-test-dynamicpv-xk4n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.872455559s
... skipping 2 lines ...
Jun 17 00:54:07.702: INFO: Pod "pod-subpath-test-dynamicpv-xk4n": Phase="Pending", Reason="", readiness=false. Elapsed: 17.305943s
Jun 17 00:54:09.847: INFO: Pod "pod-subpath-test-dynamicpv-xk4n": Phase="Pending", Reason="", readiness=false. Elapsed: 19.450539555s
Jun 17 00:54:11.998: INFO: Pod "pod-subpath-test-dynamicpv-xk4n": Phase="Pending", Reason="", readiness=false. Elapsed: 21.601138642s
Jun 17 00:54:14.142: INFO: Pod "pod-subpath-test-dynamicpv-xk4n": Phase="Pending", Reason="", readiness=false. Elapsed: 23.745537204s
Jun 17 00:54:16.287: INFO: Pod "pod-subpath-test-dynamicpv-xk4n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.890727408s
STEP: Saw pod success
Jun 17 00:54:16.287: INFO: Pod "pod-subpath-test-dynamicpv-xk4n" satisfied condition "Succeeded or Failed"
Jun 17 00:54:16.430: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-xk4n container test-container-subpath-dynamicpv-xk4n: <nil>
STEP: delete the pod
Jun 17 00:54:16.733: INFO: Waiting for pod pod-subpath-test-dynamicpv-xk4n to disappear
Jun 17 00:54:16.877: INFO: Pod pod-subpath-test-dynamicpv-xk4n no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-xk4n
Jun 17 00:54:16.877: INFO: Deleting pod "pod-subpath-test-dynamicpv-xk4n" in namespace "provisioning-5679"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":36,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:38.795: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 40 lines ...
• [SLOW TEST:22.474 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777
------------------------------
{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":6,"skipped":54,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 82 lines ...
Jun 17 00:53:50.236: INFO: PersistentVolumeClaim csi-hostpathh5q24 found but phase is Pending instead of Bound.
Jun 17 00:53:52.384: INFO: PersistentVolumeClaim csi-hostpathh5q24 found but phase is Pending instead of Bound.
Jun 17 00:53:54.530: INFO: PersistentVolumeClaim csi-hostpathh5q24 found but phase is Pending instead of Bound.
Jun 17 00:53:56.675: INFO: PersistentVolumeClaim csi-hostpathh5q24 found and phase=Bound (1m10.972272832s)
STEP: Creating pod pod-subpath-test-dynamicpv-n5db
STEP: Creating a pod to test subpath
Jun 17 00:53:57.110: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-n5db" in namespace "provisioning-8375" to be "Succeeded or Failed"
Jun 17 00:53:57.254: INFO: Pod "pod-subpath-test-dynamicpv-n5db": Phase="Pending", Reason="", readiness=false. Elapsed: 144.574065ms
Jun 17 00:53:59.401: INFO: Pod "pod-subpath-test-dynamicpv-n5db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290801648s
Jun 17 00:54:01.546: INFO: Pod "pod-subpath-test-dynamicpv-n5db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436048365s
Jun 17 00:54:03.691: INFO: Pod "pod-subpath-test-dynamicpv-n5db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581077059s
Jun 17 00:54:05.840: INFO: Pod "pod-subpath-test-dynamicpv-n5db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.730108268s
Jun 17 00:54:07.985: INFO: Pod "pod-subpath-test-dynamicpv-n5db": Phase="Pending", Reason="", readiness=false. Elapsed: 10.875281945s
Jun 17 00:54:10.131: INFO: Pod "pod-subpath-test-dynamicpv-n5db": Phase="Pending", Reason="", readiness=false. Elapsed: 13.020702292s
Jun 17 00:54:12.276: INFO: Pod "pod-subpath-test-dynamicpv-n5db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.166217659s
STEP: Saw pod success
Jun 17 00:54:12.276: INFO: Pod "pod-subpath-test-dynamicpv-n5db" satisfied condition "Succeeded or Failed"
Jun 17 00:54:12.421: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-n5db container test-container-subpath-dynamicpv-n5db: <nil>
STEP: delete the pod
Jun 17 00:54:12.722: INFO: Waiting for pod pod-subpath-test-dynamicpv-n5db to disappear
Jun 17 00:54:12.867: INFO: Pod pod-subpath-test-dynamicpv-n5db no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-n5db
Jun 17 00:54:12.867: INFO: Deleting pod "pod-subpath-test-dynamicpv-n5db" in namespace "provisioning-8375"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:41.103: INFO: Only supported for providers [azure] (not aws)
... skipping 80 lines ...
• [SLOW TEST:14.538 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:48.056: INFO: Driver local doesn't support ext3 -- skipping
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:54:49.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-119" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":4,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:49.903: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:50.315: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 88 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:50.449: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 59 lines ...
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:54:50.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-10c3bcd3-9f49-4591-a638-003bbe14195e
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:54:51.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-385" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:70.929 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":80,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:51.716: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 145 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should not require VolumeAttach for drivers without attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:54.606: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 47 lines ...
Jun 17 00:54:17.763: INFO: PersistentVolume nfs-6cbr5 found and phase=Bound (143.246456ms)
Jun 17 00:54:17.906: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-56md6] to have phase Bound
Jun 17 00:54:18.049: INFO: PersistentVolumeClaim pvc-56md6 found and phase=Bound (143.147123ms)
STEP: Checking pod has write access to PersistentVolumes
Jun 17 00:54:18.294: INFO: Creating nfs test pod
Jun 17 00:54:18.460: INFO: Pod should terminate with exitcode 0 (success)
Jun 17 00:54:18.460: INFO: Waiting up to 5m0s for pod "pvc-tester-qh2pz" in namespace "pv-3021" to be "Succeeded or Failed"
Jun 17 00:54:18.603: INFO: Pod "pvc-tester-qh2pz": Phase="Pending", Reason="", readiness=false. Elapsed: 143.033542ms
Jun 17 00:54:20.747: INFO: Pod "pvc-tester-qh2pz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287334334s
Jun 17 00:54:22.893: INFO: Pod "pvc-tester-qh2pz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432629001s
Jun 17 00:54:25.037: INFO: Pod "pvc-tester-qh2pz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577080001s
STEP: Saw pod success
Jun 17 00:54:25.037: INFO: Pod "pvc-tester-qh2pz" satisfied condition "Succeeded or Failed"
Jun 17 00:54:25.037: INFO: Pod pvc-tester-qh2pz succeeded 
Jun 17 00:54:25.037: INFO: Deleting pod "pvc-tester-qh2pz" in namespace "pv-3021"
Jun 17 00:54:25.186: INFO: Wait up to 5m0s for pod "pvc-tester-qh2pz" to be fully deleted
Jun 17 00:54:25.472: INFO: Creating nfs test pod
Jun 17 00:54:25.617: INFO: Pod should terminate with exitcode 0 (success)
Jun 17 00:54:25.617: INFO: Waiting up to 5m0s for pod "pvc-tester-s9cl9" in namespace "pv-3021" to be "Succeeded or Failed"
Jun 17 00:54:25.760: INFO: Pod "pvc-tester-s9cl9": Phase="Pending", Reason="", readiness=false. Elapsed: 142.994981ms
Jun 17 00:54:27.904: INFO: Pod "pvc-tester-s9cl9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286805614s
Jun 17 00:54:30.056: INFO: Pod "pvc-tester-s9cl9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438623531s
Jun 17 00:54:32.200: INFO: Pod "pvc-tester-s9cl9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583057017s
Jun 17 00:54:34.346: INFO: Pod "pvc-tester-s9cl9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.729314102s
STEP: Saw pod success
Jun 17 00:54:34.347: INFO: Pod "pvc-tester-s9cl9" satisfied condition "Succeeded or Failed"
Jun 17 00:54:34.347: INFO: Pod pvc-tester-s9cl9 succeeded 
Jun 17 00:54:34.347: INFO: Deleting pod "pvc-tester-s9cl9" in namespace "pv-3021"
Jun 17 00:54:34.494: INFO: Wait up to 5m0s for pod "pvc-tester-s9cl9" to be fully deleted
Jun 17 00:54:34.781: INFO: Creating nfs test pod
Jun 17 00:54:34.934: INFO: Pod should terminate with exitcode 0 (success)
Jun 17 00:54:34.934: INFO: Waiting up to 5m0s for pod "pvc-tester-t76gk" in namespace "pv-3021" to be "Succeeded or Failed"
Jun 17 00:54:35.077: INFO: Pod "pvc-tester-t76gk": Phase="Pending", Reason="", readiness=false. Elapsed: 143.08667ms
Jun 17 00:54:37.221: INFO: Pod "pvc-tester-t76gk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287540796s
Jun 17 00:54:39.368: INFO: Pod "pvc-tester-t76gk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434167621s
Jun 17 00:54:41.512: INFO: Pod "pvc-tester-t76gk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577904526s
Jun 17 00:54:43.656: INFO: Pod "pvc-tester-t76gk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.722525827s
STEP: Saw pod success
Jun 17 00:54:43.657: INFO: Pod "pvc-tester-t76gk" satisfied condition "Succeeded or Failed"
Jun 17 00:54:43.657: INFO: Pod pvc-tester-t76gk succeeded 
Jun 17 00:54:43.657: INFO: Deleting pod "pvc-tester-t76gk" in namespace "pv-3021"
Jun 17 00:54:43.804: INFO: Wait up to 5m0s for pod "pvc-tester-t76gk" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
Jun 17 00:54:44.233: INFO: Deleting PVC pvc-g2lv6 to trigger reclamation of PV nfs-hrdr9
Jun 17 00:54:44.234: INFO: Deleting PersistentVolumeClaim "pvc-g2lv6"
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 3 PVs and 3 PVCs: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:243
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","total":-1,"completed":6,"skipped":51,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:54:55.321: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 170 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":7,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:00.265: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 140 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:55:01.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1653" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":8,"skipped":71,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it","total":-1,"completed":7,"skipped":46,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:55:01.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:55:02.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4652" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":8,"skipped":46,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 47 lines ...
Jun 17 00:54:13.677: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-xnvzw] to have phase Bound
Jun 17 00:54:13.822: INFO: PersistentVolumeClaim pvc-xnvzw found and phase=Bound (144.291493ms)
STEP: Deleting the previously created pod
Jun 17 00:54:32.549: INFO: Deleting pod "pvc-volume-tester-mcj6p" in namespace "csi-mock-volumes-3406"
Jun 17 00:54:32.694: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mcj6p" to be fully deleted
STEP: Checking CSI driver logs
Jun 17 00:54:39.130: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/739a2922-089d-4ea7-87ab-4e775a3b3415/volumes/kubernetes.io~csi/pvc-77ce4f64-de42-4328-82fc-70a309bb0ea7/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-mcj6p
Jun 17 00:54:39.131: INFO: Deleting pod "pvc-volume-tester-mcj6p" in namespace "csi-mock-volumes-3406"
STEP: Deleting claim pvc-xnvzw
Jun 17 00:54:39.564: INFO: Waiting up to 2m0s for PersistentVolume pvc-77ce4f64-de42-4328-82fc-70a309bb0ea7 to get deleted
Jun 17 00:54:39.708: INFO: PersistentVolume pvc-77ce4f64-de42-4328-82fc-70a309bb0ea7 was removed
STEP: Deleting storageclass csi-mock-volumes-3406-sc8mzjk
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=nil
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":3,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:03.271: INFO: Driver local doesn't support ext4 -- skipping
... skipping 65 lines ...
Jun 17 00:54:36.713: INFO: PersistentVolumeClaim pvc-q8pwg found but phase is Pending instead of Bound.
Jun 17 00:54:38.862: INFO: PersistentVolumeClaim pvc-q8pwg found and phase=Bound (15.164956302s)
Jun 17 00:54:38.862: INFO: Waiting up to 3m0s for PersistentVolume aws-5d9wx to have phase Bound
Jun 17 00:54:39.006: INFO: PersistentVolume aws-5d9wx found and phase=Bound (144.266716ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-x5bh
STEP: Creating a pod to test exec-volume-test
Jun 17 00:54:39.442: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-x5bh" in namespace "volume-2004" to be "Succeeded or Failed"
Jun 17 00:54:39.589: INFO: Pod "exec-volume-test-preprovisionedpv-x5bh": Phase="Pending", Reason="", readiness=false. Elapsed: 147.172479ms
Jun 17 00:54:41.734: INFO: Pod "exec-volume-test-preprovisionedpv-x5bh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291735625s
Jun 17 00:54:43.880: INFO: Pod "exec-volume-test-preprovisionedpv-x5bh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438283144s
Jun 17 00:54:46.025: INFO: Pod "exec-volume-test-preprovisionedpv-x5bh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583312919s
Jun 17 00:54:48.170: INFO: Pod "exec-volume-test-preprovisionedpv-x5bh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.728484026s
Jun 17 00:54:50.315: INFO: Pod "exec-volume-test-preprovisionedpv-x5bh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.873293651s
STEP: Saw pod success
Jun 17 00:54:50.315: INFO: Pod "exec-volume-test-preprovisionedpv-x5bh" satisfied condition "Succeeded or Failed"
Jun 17 00:54:50.460: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-x5bh container exec-container-preprovisionedpv-x5bh: <nil>
STEP: delete the pod
Jun 17 00:54:50.761: INFO: Waiting for pod exec-volume-test-preprovisionedpv-x5bh to disappear
Jun 17 00:54:50.905: INFO: Pod exec-volume-test-preprovisionedpv-x5bh no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-x5bh
Jun 17 00:54:50.905: INFO: Deleting pod "exec-volume-test-preprovisionedpv-x5bh" in namespace "volume-2004"
STEP: Deleting pv and pvc
Jun 17 00:54:51.050: INFO: Deleting PersistentVolumeClaim "pvc-q8pwg"
Jun 17 00:54:51.196: INFO: Deleting PersistentVolume "aws-5d9wx"
Jun 17 00:54:51.641: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0f6b6cb3df320b791", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f6b6cb3df320b791 is currently attached to i-08a1e274e6efce9f5
	status code: 400, request id: 3b92f7e3-6b4e-4006-a0ec-6061e7145879
Jun 17 00:54:57.445: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0f6b6cb3df320b791", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f6b6cb3df320b791 is currently attached to i-08a1e274e6efce9f5
	status code: 400, request id: 11ee3eec-f9ad-4614-9a72-8cc4fce8f5ce
Jun 17 00:55:03.208: INFO: Successfully deleted PD "aws://sa-east-1a/vol-0f6b6cb3df320b791".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:55:03.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2004" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":10,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Jun 17 00:54:51.931: INFO: PersistentVolumeClaim pvc-t76xr found but phase is Pending instead of Bound.
Jun 17 00:54:54.076: INFO: PersistentVolumeClaim pvc-t76xr found and phase=Bound (6.579353733s)
Jun 17 00:54:54.076: INFO: Waiting up to 3m0s for PersistentVolume local-m4q8s to have phase Bound
Jun 17 00:54:54.220: INFO: PersistentVolume local-m4q8s found and phase=Bound (143.916886ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vlvl
STEP: Creating a pod to test subpath
Jun 17 00:54:54.654: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vlvl" in namespace "provisioning-8441" to be "Succeeded or Failed"
Jun 17 00:54:54.799: INFO: Pod "pod-subpath-test-preprovisionedpv-vlvl": Phase="Pending", Reason="", readiness=false. Elapsed: 144.297256ms
Jun 17 00:54:56.945: INFO: Pod "pod-subpath-test-preprovisionedpv-vlvl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290390528s
Jun 17 00:54:59.091: INFO: Pod "pod-subpath-test-preprovisionedpv-vlvl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436598651s
Jun 17 00:55:01.236: INFO: Pod "pod-subpath-test-preprovisionedpv-vlvl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581825876s
Jun 17 00:55:03.383: INFO: Pod "pod-subpath-test-preprovisionedpv-vlvl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.728637854s
STEP: Saw pod success
Jun 17 00:55:03.383: INFO: Pod "pod-subpath-test-preprovisionedpv-vlvl" satisfied condition "Succeeded or Failed"
Jun 17 00:55:03.528: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-vlvl container test-container-volume-preprovisionedpv-vlvl: <nil>
STEP: delete the pod
Jun 17 00:55:03.825: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vlvl to disappear
Jun 17 00:55:03.969: INFO: Pod pod-subpath-test-preprovisionedpv-vlvl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vlvl
Jun 17 00:55:03.969: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vlvl" in namespace "provisioning-8441"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:05.964: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 97 lines ...
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jun 17 00:54:58.841: INFO: start=2021-06-17 00:54:53.675042937 +0000 UTC m=+187.513288627, now=2021-06-17 00:54:58.841450533 +0000 UTC m=+192.679696233, kubelet pod: {"metadata":{"name":"pod-submit-remove-5651c7c5-7b73-4cdb-a3aa-90edc9604f0b","namespace":"pods-1062","uid":"dee40232-9337-4c34-98fc-a11beaa15c61","resourceVersion":"7550","creationTimestamp":"2021-06-17T00:54:50Z","deletionTimestamp":"2021-06-17T00:55:23Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"647058413"},"annotations":{"kubernetes.io/config.seen":"2021-06-17T00:54:50.882083811Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-06-17T00:54:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-bbvck","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-bbvck","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-46-228.sa-east-1.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-06-17T00:54:50Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-06-17T00:54:55Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-06-17T00:54:55Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-06-17T00:54:50Z"}],"hostIP":"172.20.46.228","podIP":"100.96.1.59","podIPs":[{"ip":"100.96.1.59"}],"startTime":"2021-06-17T00:54:50Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-06-17T00:54:51Z","finishedAt":"2021-06-17T00:54:54Z","containerID":"docker://c731e875acacc3f8e78c1edf0674459f00d4a105f0239b8a59ef142f82b55ce6"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://c731e875acacc3f8e78c1edf0674459f00d4a105f0239b8a59ef142f82b55ce6","started":false}],"qosClass":"BestEffort"}}
Jun 17 00:55:03.828: INFO: start=2021-06-17 00:54:53.675042937 +0000 UTC m=+187.513288627, now=2021-06-17 00:55:03.828874931 +0000 UTC m=+197.667120675, kubelet pod: {"metadata":{"name":"pod-submit-remove-5651c7c5-7b73-4cdb-a3aa-90edc9604f0b","namespace":"pods-1062","uid":"dee40232-9337-4c34-98fc-a11beaa15c61","resourceVersion":"7550","creationTimestamp":"2021-06-17T00:54:50Z","deletionTimestamp":"2021-06-17T00:55:23Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"647058413"},"annotations":{"kubernetes.io/config.seen":"2021-06-17T00:54:50.882083811Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-06-17T00:54:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-bbvck","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-bbvck","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-46-228.sa-east-1.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-06-17T00:54:50Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-06-17T00:54:55Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-06-17T00:54:55Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-06-17T00:54:50Z"}],"hostIP":"172.20.46.228","podIP":"100.96.1.59","podIPs":[{"ip":"100.96.1.59"}],"startTime":"2021-06-17T00:54:50Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-06-17T00:54:51Z","finishedAt":"2021-06-17T00:54:54Z","containerID":"docker://c731e875acacc3f8e78c1edf0674459f00d4a105f0239b8a59ef142f82b55ce6"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://c731e875acacc3f8e78c1edf0674459f00d4a105f0239b8a59ef142f82b55ce6","started":false}],"qosClass":"BestEffort"}}
Jun 17 00:55:08.825: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:55:08.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1062" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":5,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:09.287: INFO: Only supported for providers [openstack] (not aws)
... skipping 28 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jun 17 00:54:55.345: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 17 00:54:55.489: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-mgc2
STEP: Creating a pod to test subpath
Jun 17 00:54:55.637: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-mgc2" in namespace "provisioning-5128" to be "Succeeded or Failed"
Jun 17 00:54:55.781: INFO: Pod "pod-subpath-test-inlinevolume-mgc2": Phase="Pending", Reason="", readiness=false. Elapsed: 143.555846ms
Jun 17 00:54:57.926: INFO: Pod "pod-subpath-test-inlinevolume-mgc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288618185s
Jun 17 00:55:00.076: INFO: Pod "pod-subpath-test-inlinevolume-mgc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43817883s
Jun 17 00:55:02.233: INFO: Pod "pod-subpath-test-inlinevolume-mgc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.59536549s
Jun 17 00:55:04.379: INFO: Pod "pod-subpath-test-inlinevolume-mgc2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.741086815s
Jun 17 00:55:06.524: INFO: Pod "pod-subpath-test-inlinevolume-mgc2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.886816732s
Jun 17 00:55:08.671: INFO: Pod "pod-subpath-test-inlinevolume-mgc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.033175829s
STEP: Saw pod success
Jun 17 00:55:08.671: INFO: Pod "pod-subpath-test-inlinevolume-mgc2" satisfied condition "Succeeded or Failed"
Jun 17 00:55:08.815: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-mgc2 container test-container-volume-inlinevolume-mgc2: <nil>
STEP: delete the pod
Jun 17 00:55:09.110: INFO: Waiting for pod pod-subpath-test-inlinevolume-mgc2 to disappear
Jun 17 00:55:09.254: INFO: Pod pod-subpath-test-inlinevolume-mgc2 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-mgc2
Jun 17 00:55:09.254: INFO: Deleting pod "pod-subpath-test-inlinevolume-mgc2" in namespace "provisioning-5128"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:12.073 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":9,"skipped":72,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:13.483: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 134 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":3,"skipped":9,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:14.630: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 193 lines ...
Jun 17 00:54:52.256: INFO: PersistentVolumeClaim pvc-9tlt8 found but phase is Pending instead of Bound.
Jun 17 00:54:54.403: INFO: PersistentVolumeClaim pvc-9tlt8 found and phase=Bound (4.436247315s)
Jun 17 00:54:54.403: INFO: Waiting up to 3m0s for PersistentVolume local-4ng47 to have phase Bound
Jun 17 00:54:54.547: INFO: PersistentVolume local-4ng47 found and phase=Bound (143.560661ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rdgs
STEP: Creating a pod to test subpath
Jun 17 00:54:54.992: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rdgs" in namespace "provisioning-2399" to be "Succeeded or Failed"
Jun 17 00:54:55.136: INFO: Pod "pod-subpath-test-preprovisionedpv-rdgs": Phase="Pending", Reason="", readiness=false. Elapsed: 143.523811ms
Jun 17 00:54:57.280: INFO: Pod "pod-subpath-test-preprovisionedpv-rdgs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287574699s
Jun 17 00:54:59.425: INFO: Pod "pod-subpath-test-preprovisionedpv-rdgs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432719987s
Jun 17 00:55:01.576: INFO: Pod "pod-subpath-test-preprovisionedpv-rdgs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583377251s
Jun 17 00:55:03.722: INFO: Pod "pod-subpath-test-preprovisionedpv-rdgs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.730155252s
Jun 17 00:55:05.868: INFO: Pod "pod-subpath-test-preprovisionedpv-rdgs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.875601962s
Jun 17 00:55:08.012: INFO: Pod "pod-subpath-test-preprovisionedpv-rdgs": Phase="Pending", Reason="", readiness=false. Elapsed: 13.019941341s
Jun 17 00:55:10.167: INFO: Pod "pod-subpath-test-preprovisionedpv-rdgs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.174922287s
STEP: Saw pod success
Jun 17 00:55:10.167: INFO: Pod "pod-subpath-test-preprovisionedpv-rdgs" satisfied condition "Succeeded or Failed"
Jun 17 00:55:10.311: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-rdgs container test-container-subpath-preprovisionedpv-rdgs: <nil>
STEP: delete the pod
Jun 17 00:55:10.607: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rdgs to disappear
Jun 17 00:55:10.751: INFO: Pod pod-subpath-test-preprovisionedpv-rdgs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rdgs
Jun 17 00:55:10.751: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rdgs" in namespace "provisioning-2399"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":37,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:14.893: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 41 lines ...
• [SLOW TEST:33.805 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:14.995: INFO: Only supported for providers [openstack] (not aws)
... skipping 73 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 17 00:55:04.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53d6ffb3-f464-4621-9b48-65cb1cbcb136" in namespace "projected-6170" to be "Succeeded or Failed"
Jun 17 00:55:04.316: INFO: Pod "downwardapi-volume-53d6ffb3-f464-4621-9b48-65cb1cbcb136": Phase="Pending", Reason="", readiness=false. Elapsed: 144.355633ms
Jun 17 00:55:06.461: INFO: Pod "downwardapi-volume-53d6ffb3-f464-4621-9b48-65cb1cbcb136": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289794092s
Jun 17 00:55:08.608: INFO: Pod "downwardapi-volume-53d6ffb3-f464-4621-9b48-65cb1cbcb136": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4365674s
Jun 17 00:55:10.753: INFO: Pod "downwardapi-volume-53d6ffb3-f464-4621-9b48-65cb1cbcb136": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581694594s
Jun 17 00:55:12.898: INFO: Pod "downwardapi-volume-53d6ffb3-f464-4621-9b48-65cb1cbcb136": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726566915s
Jun 17 00:55:15.043: INFO: Pod "downwardapi-volume-53d6ffb3-f464-4621-9b48-65cb1cbcb136": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.871918294s
STEP: Saw pod success
Jun 17 00:55:15.043: INFO: Pod "downwardapi-volume-53d6ffb3-f464-4621-9b48-65cb1cbcb136" satisfied condition "Succeeded or Failed"
Jun 17 00:55:15.188: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod downwardapi-volume-53d6ffb3-f464-4621-9b48-65cb1cbcb136 container client-container: <nil>
STEP: delete the pod
Jun 17 00:55:15.483: INFO: Waiting for pod downwardapi-volume-53d6ffb3-f464-4621-9b48-65cb1cbcb136 to disappear
Jun 17 00:55:15.632: INFO: Pod downwardapi-volume-53d6ffb3-f464-4621-9b48-65cb1cbcb136 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:15.940: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 88 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 49 lines ...
Jun 17 00:54:55.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 17 00:54:56.223: INFO: Waiting up to 5m0s for pod "pod-0f13a2b5-2a86-460e-a046-0dfec7f82289" in namespace "emptydir-8338" to be "Succeeded or Failed"
Jun 17 00:54:56.366: INFO: Pod "pod-0f13a2b5-2a86-460e-a046-0dfec7f82289": Phase="Pending", Reason="", readiness=false. Elapsed: 143.036603ms
Jun 17 00:54:58.510: INFO: Pod "pod-0f13a2b5-2a86-460e-a046-0dfec7f82289": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287463351s
Jun 17 00:55:00.654: INFO: Pod "pod-0f13a2b5-2a86-460e-a046-0dfec7f82289": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431139008s
Jun 17 00:55:02.798: INFO: Pod "pod-0f13a2b5-2a86-460e-a046-0dfec7f82289": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575029689s
Jun 17 00:55:04.943: INFO: Pod "pod-0f13a2b5-2a86-460e-a046-0dfec7f82289": Phase="Pending", Reason="", readiness=false. Elapsed: 8.720103751s
Jun 17 00:55:07.087: INFO: Pod "pod-0f13a2b5-2a86-460e-a046-0dfec7f82289": Phase="Pending", Reason="", readiness=false. Elapsed: 10.864051075s
Jun 17 00:55:09.231: INFO: Pod "pod-0f13a2b5-2a86-460e-a046-0dfec7f82289": Phase="Pending", Reason="", readiness=false. Elapsed: 13.008432693s
Jun 17 00:55:11.376: INFO: Pod "pod-0f13a2b5-2a86-460e-a046-0dfec7f82289": Phase="Pending", Reason="", readiness=false. Elapsed: 15.152989661s
Jun 17 00:55:13.522: INFO: Pod "pod-0f13a2b5-2a86-460e-a046-0dfec7f82289": Phase="Pending", Reason="", readiness=false. Elapsed: 17.298870232s
Jun 17 00:55:15.666: INFO: Pod "pod-0f13a2b5-2a86-460e-a046-0dfec7f82289": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.442680769s
STEP: Saw pod success
Jun 17 00:55:15.666: INFO: Pod "pod-0f13a2b5-2a86-460e-a046-0dfec7f82289" satisfied condition "Succeeded or Failed"
Jun 17 00:55:15.809: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-0f13a2b5-2a86-460e-a046-0dfec7f82289 container test-container: <nil>
STEP: delete the pod
Jun 17 00:55:16.110: INFO: Waiting for pod pod-0f13a2b5-2a86-460e-a046-0dfec7f82289 to disappear
Jun 17 00:55:16.253: INFO: Pod pod-0f13a2b5-2a86-460e-a046-0dfec7f82289 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:55:17.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7615" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":5,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:17.386: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":4,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:20.665: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-96fcb71b-aa68-4c06-aa29-86980eb09f87
STEP: Creating a pod to test consume configMaps
Jun 17 00:55:21.701: INFO: Waiting up to 5m0s for pod "pod-configmaps-74bc6aba-ea9d-409c-8d17-6528c01b4c36" in namespace "configmap-6403" to be "Succeeded or Failed"
Jun 17 00:55:21.846: INFO: Pod "pod-configmaps-74bc6aba-ea9d-409c-8d17-6528c01b4c36": Phase="Pending", Reason="", readiness=false. Elapsed: 145.360471ms
Jun 17 00:55:23.994: INFO: Pod "pod-configmaps-74bc6aba-ea9d-409c-8d17-6528c01b4c36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.292787264s
STEP: Saw pod success
Jun 17 00:55:23.994: INFO: Pod "pod-configmaps-74bc6aba-ea9d-409c-8d17-6528c01b4c36" satisfied condition "Succeeded or Failed"
Jun 17 00:55:24.138: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-configmaps-74bc6aba-ea9d-409c-8d17-6528c01b4c36 container agnhost-container: <nil>
STEP: delete the pod
Jun 17 00:55:24.434: INFO: Waiting for pod pod-configmaps-74bc6aba-ea9d-409c-8d17-6528c01b4c36 to disappear
Jun 17 00:55:24.578: INFO: Pod pod-configmaps-74bc6aba-ea9d-409c-8d17-6528c01b4c36 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:55:24.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6403" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:24.900: INFO: Driver "nfs" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 148 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:25.347: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 103 lines ...
Jun 17 00:55:17.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63759488104, loc:(*time.Location)(0x9dde5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63759488104, loc:(*time.Location)(0x9dde5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63759488104, loc:(*time.Location)(0x9dde5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63759488104, loc:(*time.Location)(0x9dde5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 00:55:19.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63759488104, loc:(*time.Location)(0x9dde5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63759488104, loc:(*time.Location)(0x9dde5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63759488104, loc:(*time.Location)(0x9dde5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63759488104, loc:(*time.Location)(0x9dde5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 00:55:21.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63759488104, loc:(*time.Location)(0x9dde5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63759488104, loc:(*time.Location)(0x9dde5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63759488104, loc:(*time.Location)(0x9dde5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63759488104, loc:(*time.Location)(0x9dde5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 17 00:55:24.477: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:55:25.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6166" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:23.621 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":9,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:26.549: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 119 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":60,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:55:26.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 17 00:55:27.438: INFO: Waiting up to 5m0s for pod "security-context-589a20b5-cb85-4150-964f-d04bbbb0ef0c" in namespace "security-context-7979" to be "Succeeded or Failed"
Jun 17 00:55:27.581: INFO: Pod "security-context-589a20b5-cb85-4150-964f-d04bbbb0ef0c": Phase="Pending", Reason="", readiness=false. Elapsed: 142.908822ms
Jun 17 00:55:29.725: INFO: Pod "security-context-589a20b5-cb85-4150-964f-d04bbbb0ef0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.28695146s
STEP: Saw pod success
Jun 17 00:55:29.725: INFO: Pod "security-context-589a20b5-cb85-4150-964f-d04bbbb0ef0c" satisfied condition "Succeeded or Failed"
Jun 17 00:55:29.868: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod security-context-589a20b5-cb85-4150-964f-d04bbbb0ef0c container test-container: <nil>
STEP: delete the pod
Jun 17 00:55:30.160: INFO: Waiting for pod security-context-589a20b5-cb85-4150-964f-d04bbbb0ef0c to disappear
Jun 17 00:55:30.303: INFO: Pod security-context-589a20b5-cb85-4150-964f-d04bbbb0ef0c no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:55:30.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-7979" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":10,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:30.611: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:53:41.856: INFO: >>> kubeConfig: /root/.kube/config
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":5,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:34.089: INFO: Driver "nfs" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 133 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:36.781: INFO: Only supported for providers [openstack] (not aws)
... skipping 23 lines ...
Jun 17 00:55:34.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jun 17 00:55:35.122: INFO: Waiting up to 5m0s for pod "downward-api-f5901917-3c8b-433d-930c-82c0fd142a68" in namespace "downward-api-1523" to be "Succeeded or Failed"
Jun 17 00:55:35.267: INFO: Pod "downward-api-f5901917-3c8b-433d-930c-82c0fd142a68": Phase="Pending", Reason="", readiness=false. Elapsed: 145.093723ms
Jun 17 00:55:37.419: INFO: Pod "downward-api-f5901917-3c8b-433d-930c-82c0fd142a68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.297669229s
STEP: Saw pod success
Jun 17 00:55:37.420: INFO: Pod "downward-api-f5901917-3c8b-433d-930c-82c0fd142a68" satisfied condition "Succeeded or Failed"
Jun 17 00:55:37.564: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod downward-api-f5901917-3c8b-433d-930c-82c0fd142a68 container dapi-container: <nil>
STEP: delete the pod
Jun 17 00:55:37.861: INFO: Waiting for pod downward-api-f5901917-3c8b-433d-930c-82c0fd142a68 to disappear
Jun 17 00:55:38.006: INFO: Pod downward-api-f5901917-3c8b-433d-930c-82c0fd142a68 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:55:38.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1523" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:38.308: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 61 lines ...
Jun 17 00:52:07.483: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5808
Jun 17 00:52:07.634: INFO: creating *v1.StatefulSet: csi-mock-volumes-5808-6397/csi-mockplugin-attacher
Jun 17 00:52:07.779: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5808"
Jun 17 00:52:07.922: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5808 to register on node ip-172-20-55-34.sa-east-1.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Jun 17 00:54:53.883: INFO: Error getting logs for pod inline-volume-fcnk7: the server rejected our request for an unknown reason (get pods inline-volume-fcnk7)
Jun 17 00:54:54.028: INFO: Deleting pod "inline-volume-fcnk7" in namespace "csi-mock-volumes-5808"
Jun 17 00:54:54.172: INFO: Wait up to 5m0s for pod "inline-volume-fcnk7" to be fully deleted
STEP: Deleting the previously created pod
Jun 17 00:55:06.460: INFO: Deleting pod "pvc-volume-tester-5dvfj" in namespace "csi-mock-volumes-5808"
Jun 17 00:55:06.606: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5dvfj" to be fully deleted
STEP: Checking CSI driver logs
Jun 17 00:55:17.043: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-5dvfj
Jun 17 00:55:17.043: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-5808
Jun 17 00:55:17.043: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: d4e16555-017e-4811-8525-c618ad2d3de7
Jun 17 00:55:17.043: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jun 17 00:55:17.043: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Jun 17 00:55:17.043: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-acf3d02f13da0a57a936f6ea35179669f086749fe82a43520471c65a48e8b1a8","target_path":"/var/lib/kubelet/pods/d4e16555-017e-4811-8525-c618ad2d3de7/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-5dvfj
Jun 17 00:55:17.043: INFO: Deleting pod "pvc-volume-tester-5dvfj" in namespace "csi-mock-volumes-5808"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-5808
STEP: Waiting for namespaces [csi-mock-volumes-5808] to vanish
STEP: uninstalling csi mock driver
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    contain ephemeral=true when using inline volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:55:40.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-d2b46ae9-dd61-4bc3-b242-63e6d6e86916
STEP: Creating a pod to test consume configMaps
Jun 17 00:55:41.589: INFO: Waiting up to 5m0s for pod "pod-configmaps-164152a9-5e4a-4ff0-b9e1-8b15b86d3b14" in namespace "configmap-1006" to be "Succeeded or Failed"
Jun 17 00:55:41.733: INFO: Pod "pod-configmaps-164152a9-5e4a-4ff0-b9e1-8b15b86d3b14": Phase="Pending", Reason="", readiness=false. Elapsed: 143.188502ms
Jun 17 00:55:43.877: INFO: Pod "pod-configmaps-164152a9-5e4a-4ff0-b9e1-8b15b86d3b14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287459275s
STEP: Saw pod success
Jun 17 00:55:43.877: INFO: Pod "pod-configmaps-164152a9-5e4a-4ff0-b9e1-8b15b86d3b14" satisfied condition "Succeeded or Failed"
Jun 17 00:55:44.020: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-configmaps-164152a9-5e4a-4ff0-b9e1-8b15b86d3b14 container agnhost-container: <nil>
STEP: delete the pod
Jun 17 00:55:44.314: INFO: Waiting for pod pod-configmaps-164152a9-5e4a-4ff0-b9e1-8b15b86d3b14 to disappear
Jun 17 00:55:44.457: INFO: Pod pod-configmaps-164152a9-5e4a-4ff0-b9e1-8b15b86d3b14 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:55:44.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1006" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:44.786: INFO: Only supported for providers [gce gke] (not aws)
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:55:46.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-5545" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":5,"skipped":43,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 26 lines ...
Jun 17 00:55:36.911: INFO: PersistentVolumeClaim pvc-mrd6k found but phase is Pending instead of Bound.
Jun 17 00:55:39.056: INFO: PersistentVolumeClaim pvc-mrd6k found and phase=Bound (15.172191358s)
Jun 17 00:55:39.056: INFO: Waiting up to 3m0s for PersistentVolume local-zkxxx to have phase Bound
Jun 17 00:55:39.201: INFO: PersistentVolume local-zkxxx found and phase=Bound (144.623737ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hkhh
STEP: Creating a pod to test subpath
Jun 17 00:55:39.636: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hkhh" in namespace "provisioning-2119" to be "Succeeded or Failed"
Jun 17 00:55:39.780: INFO: Pod "pod-subpath-test-preprovisionedpv-hkhh": Phase="Pending", Reason="", readiness=false. Elapsed: 144.266368ms
Jun 17 00:55:41.926: INFO: Pod "pod-subpath-test-preprovisionedpv-hkhh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289988097s
Jun 17 00:55:44.072: INFO: Pod "pod-subpath-test-preprovisionedpv-hkhh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43565203s
Jun 17 00:55:46.217: INFO: Pod "pod-subpath-test-preprovisionedpv-hkhh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.581011787s
STEP: Saw pod success
Jun 17 00:55:46.217: INFO: Pod "pod-subpath-test-preprovisionedpv-hkhh" satisfied condition "Succeeded or Failed"
Jun 17 00:55:46.362: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-hkhh container test-container-volume-preprovisionedpv-hkhh: <nil>
STEP: delete the pod
Jun 17 00:55:46.661: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hkhh to disappear
Jun 17 00:55:46.806: INFO: Pod pod-subpath-test-preprovisionedpv-hkhh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hkhh
Jun 17 00:55:46.806: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hkhh" in namespace "provisioning-2119"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:51.804: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 154 lines ...
Jun 17 00:55:36.688: INFO: PersistentVolumeClaim pvc-nzc22 found but phase is Pending instead of Bound.
Jun 17 00:55:38.834: INFO: PersistentVolumeClaim pvc-nzc22 found and phase=Bound (2.29032359s)
Jun 17 00:55:38.834: INFO: Waiting up to 3m0s for PersistentVolume local-4vxbh to have phase Bound
Jun 17 00:55:38.985: INFO: PersistentVolume local-4vxbh found and phase=Bound (150.956759ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-cv82
STEP: Creating a pod to test exec-volume-test
Jun 17 00:55:39.420: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-cv82" in namespace "volume-206" to be "Succeeded or Failed"
Jun 17 00:55:39.565: INFO: Pod "exec-volume-test-preprovisionedpv-cv82": Phase="Pending", Reason="", readiness=false. Elapsed: 144.425198ms
Jun 17 00:55:41.710: INFO: Pod "exec-volume-test-preprovisionedpv-cv82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289894735s
Jun 17 00:55:43.856: INFO: Pod "exec-volume-test-preprovisionedpv-cv82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435812775s
Jun 17 00:55:46.001: INFO: Pod "exec-volume-test-preprovisionedpv-cv82": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580587788s
Jun 17 00:55:48.146: INFO: Pod "exec-volume-test-preprovisionedpv-cv82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.725430859s
STEP: Saw pod success
Jun 17 00:55:48.146: INFO: Pod "exec-volume-test-preprovisionedpv-cv82" satisfied condition "Succeeded or Failed"
Jun 17 00:55:48.290: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-cv82 container exec-container-preprovisionedpv-cv82: <nil>
STEP: delete the pod
Jun 17 00:55:48.586: INFO: Waiting for pod exec-volume-test-preprovisionedpv-cv82 to disappear
Jun 17 00:55:48.731: INFO: Pod exec-volume-test-preprovisionedpv-cv82 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-cv82
Jun 17 00:55:48.731: INFO: Deleting pod "exec-volume-test-preprovisionedpv-cv82" in namespace "volume-206"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:52.532: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 128 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:53.101: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 44 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":6,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:54.728: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:55:26.418: INFO: >>> kubeConfig: /root/.kube/config
... skipping 13 lines ...
Jun 17 00:55:30.579: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [nfsfhpd5] to have phase Bound
Jun 17 00:55:30.724: INFO: PersistentVolumeClaim nfsfhpd5 found but phase is Pending instead of Bound.
Jun 17 00:55:32.870: INFO: PersistentVolumeClaim nfsfhpd5 found but phase is Pending instead of Bound.
Jun 17 00:55:35.015: INFO: PersistentVolumeClaim nfsfhpd5 found and phase=Bound (4.435506221s)
STEP: Creating pod pod-subpath-test-dynamicpv-nztv
STEP: Creating a pod to test subpath
Jun 17 00:55:35.453: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-nztv" in namespace "provisioning-8857" to be "Succeeded or Failed"
Jun 17 00:55:35.598: INFO: Pod "pod-subpath-test-dynamicpv-nztv": Phase="Pending", Reason="", readiness=false. Elapsed: 144.791958ms
Jun 17 00:55:37.744: INFO: Pod "pod-subpath-test-dynamicpv-nztv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290112223s
Jun 17 00:55:39.890: INFO: Pod "pod-subpath-test-dynamicpv-nztv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.436278253s
STEP: Saw pod success
Jun 17 00:55:39.890: INFO: Pod "pod-subpath-test-dynamicpv-nztv" satisfied condition "Succeeded or Failed"
Jun 17 00:55:40.036: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-nztv container test-container-subpath-dynamicpv-nztv: <nil>
STEP: delete the pod
Jun 17 00:55:40.338: INFO: Waiting for pod pod-subpath-test-dynamicpv-nztv to disappear
Jun 17 00:55:40.485: INFO: Pod pod-subpath-test-dynamicpv-nztv no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-nztv
Jun 17 00:55:40.485: INFO: Deleting pod "pod-subpath-test-dynamicpv-nztv" in namespace "provisioning-8857"
STEP: Creating pod pod-subpath-test-dynamicpv-nztv
STEP: Creating a pod to test subpath
Jun 17 00:55:40.778: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-nztv" in namespace "provisioning-8857" to be "Succeeded or Failed"
Jun 17 00:55:40.922: INFO: Pod "pod-subpath-test-dynamicpv-nztv": Phase="Pending", Reason="", readiness=false. Elapsed: 144.819869ms
Jun 17 00:55:43.069: INFO: Pod "pod-subpath-test-dynamicpv-nztv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.291054276s
STEP: Saw pod success
Jun 17 00:55:43.069: INFO: Pod "pod-subpath-test-dynamicpv-nztv" satisfied condition "Succeeded or Failed"
Jun 17 00:55:43.214: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-nztv container test-container-subpath-dynamicpv-nztv: <nil>
STEP: delete the pod
Jun 17 00:55:43.513: INFO: Waiting for pod pod-subpath-test-dynamicpv-nztv to disappear
Jun 17 00:55:43.658: INFO: Pod pod-subpath-test-dynamicpv-nztv no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-nztv
Jun 17 00:55:43.658: INFO: Deleting pod "pod-subpath-test-dynamicpv-nztv" in namespace "provisioning-8857"
... skipping 39 lines ...
Jun 17 00:55:51.277: INFO: Creating a PV followed by a PVC
Jun 17 00:55:51.570: INFO: Waiting for PV local-pvmdcf4 to bind to PVC pvc-79n4h
Jun 17 00:55:51.570: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-79n4h] to have phase Bound
Jun 17 00:55:51.713: INFO: PersistentVolumeClaim pvc-79n4h found and phase=Bound (143.158845ms)
Jun 17 00:55:51.713: INFO: Waiting up to 3m0s for PersistentVolume local-pvmdcf4 to have phase Bound
Jun 17 00:55:51.857: INFO: PersistentVolume local-pvmdcf4 found and phase=Bound (143.164462ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jun 17 00:55:52.145: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c63cdbbc-3e23-4ff4-a3de-46f070a223c6] Namespace:persistent-local-volumes-test-6646 PodName:hostexec-ip-172-20-60-41.sa-east-1.compute.internal-lwtmd ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 17 00:55:52.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:9.140 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
... skipping 20 lines ...
Jun 17 00:55:37.028: INFO: PersistentVolumeClaim pvc-h682h found but phase is Pending instead of Bound.
Jun 17 00:55:39.176: INFO: PersistentVolumeClaim pvc-h682h found and phase=Bound (13.02928784s)
Jun 17 00:55:39.176: INFO: Waiting up to 3m0s for PersistentVolume local-42mld to have phase Bound
Jun 17 00:55:39.321: INFO: PersistentVolume local-42mld found and phase=Bound (144.689322ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8wv2
STEP: Creating a pod to test subpath
Jun 17 00:55:39.755: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8wv2" in namespace "provisioning-8116" to be "Succeeded or Failed"
Jun 17 00:55:39.899: INFO: Pod "pod-subpath-test-preprovisionedpv-8wv2": Phase="Pending", Reason="", readiness=false. Elapsed: 144.245427ms
Jun 17 00:55:42.045: INFO: Pod "pod-subpath-test-preprovisionedpv-8wv2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290057006s
Jun 17 00:55:44.190: INFO: Pod "pod-subpath-test-preprovisionedpv-8wv2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435140381s
Jun 17 00:55:46.335: INFO: Pod "pod-subpath-test-preprovisionedpv-8wv2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.58015325s
Jun 17 00:55:48.482: INFO: Pod "pod-subpath-test-preprovisionedpv-8wv2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726403777s
Jun 17 00:55:50.628: INFO: Pod "pod-subpath-test-preprovisionedpv-8wv2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.872499408s
Jun 17 00:55:52.773: INFO: Pod "pod-subpath-test-preprovisionedpv-8wv2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.018226613s
STEP: Saw pod success
Jun 17 00:55:52.774: INFO: Pod "pod-subpath-test-preprovisionedpv-8wv2" satisfied condition "Succeeded or Failed"
Jun 17 00:55:52.918: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-8wv2 container test-container-subpath-preprovisionedpv-8wv2: <nil>
STEP: delete the pod
Jun 17 00:55:53.239: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8wv2 to disappear
Jun 17 00:55:53.385: INFO: Pod pod-subpath-test-preprovisionedpv-8wv2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8wv2
Jun 17 00:55:53.385: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8wv2" in namespace "provisioning-8116"
... skipping 116 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:55:52.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-cb924648-176a-4a54-949a-4acfa3b95bbc
STEP: Creating a pod to test consume configMaps
Jun 17 00:55:53.592: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0ebaa2b9-cb3b-4173-bd2a-ce07eae74b21" in namespace "projected-3292" to be "Succeeded or Failed"
Jun 17 00:55:53.737: INFO: Pod "pod-projected-configmaps-0ebaa2b9-cb3b-4173-bd2a-ce07eae74b21": Phase="Pending", Reason="", readiness=false. Elapsed: 145.162413ms
Jun 17 00:55:55.882: INFO: Pod "pod-projected-configmaps-0ebaa2b9-cb3b-4173-bd2a-ce07eae74b21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290408917s
Jun 17 00:55:58.027: INFO: Pod "pod-projected-configmaps-0ebaa2b9-cb3b-4173-bd2a-ce07eae74b21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.435068325s
STEP: Saw pod success
Jun 17 00:55:58.027: INFO: Pod "pod-projected-configmaps-0ebaa2b9-cb3b-4173-bd2a-ce07eae74b21" satisfied condition "Succeeded or Failed"
Jun 17 00:55:58.182: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod pod-projected-configmaps-0ebaa2b9-cb3b-4173-bd2a-ce07eae74b21 container agnhost-container: <nil>
STEP: delete the pod
Jun 17 00:55:58.479: INFO: Waiting for pod pod-projected-configmaps-0ebaa2b9-cb3b-4173-bd2a-ce07eae74b21 to disappear
Jun 17 00:55:58.626: INFO: Pod pod-projected-configmaps-0ebaa2b9-cb3b-4173-bd2a-ce07eae74b21 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.362 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:58.944: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 14 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":26,"failed":0}
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:55:55.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:55:58.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-93" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":6,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:55:59.140: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 244 lines ...
• [SLOW TEST:8.898 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update/patch PodDisruptionBudget status [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":7,"skipped":34,"failed":0}
[BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:56:03.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename multi-az
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 132 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 88 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":6,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:05.966: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
Jun 17 00:55:59.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Jun 17 00:56:00.146: INFO: Waiting up to 5m0s for pod "pod-c4df1b72-ae5b-471f-be32-541f5156f071" in namespace "emptydir-4893" to be "Succeeded or Failed"
Jun 17 00:56:00.292: INFO: Pod "pod-c4df1b72-ae5b-471f-be32-541f5156f071": Phase="Pending", Reason="", readiness=false. Elapsed: 145.231252ms
Jun 17 00:56:02.440: INFO: Pod "pod-c4df1b72-ae5b-471f-be32-541f5156f071": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293787706s
Jun 17 00:56:04.587: INFO: Pod "pod-c4df1b72-ae5b-471f-be32-541f5156f071": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440131341s
Jun 17 00:56:06.732: INFO: Pod "pod-c4df1b72-ae5b-471f-be32-541f5156f071": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.585966829s
STEP: Saw pod success
Jun 17 00:56:06.733: INFO: Pod "pod-c4df1b72-ae5b-471f-be32-541f5156f071" satisfied condition "Succeeded or Failed"
Jun 17 00:56:06.886: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod pod-c4df1b72-ae5b-471f-be32-541f5156f071 container test-container: <nil>
STEP: delete the pod
Jun 17 00:56:07.190: INFO: Waiting for pod pod-c4df1b72-ae5b-471f-be32-541f5156f071 to disappear
Jun 17 00:56:07.335: INFO: Pod pod-c4df1b72-ae5b-471f-be32-541f5156f071 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a valid CR for CRD with validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1001
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:07.649: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 203 lines ...
Jun 17 00:55:20.088: INFO: PersistentVolumeClaim csi-hostpath5dlnf found but phase is Pending instead of Bound.
Jun 17 00:55:22.232: INFO: PersistentVolumeClaim csi-hostpath5dlnf found but phase is Pending instead of Bound.
Jun 17 00:55:24.377: INFO: PersistentVolumeClaim csi-hostpath5dlnf found but phase is Pending instead of Bound.
Jun 17 00:55:26.522: INFO: PersistentVolumeClaim csi-hostpath5dlnf found and phase=Bound (28.048400798s)
STEP: Creating pod pod-subpath-test-dynamicpv-gmmq
STEP: Creating a pod to test subpath
Jun 17 00:55:26.955: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-gmmq" in namespace "provisioning-3575" to be "Succeeded or Failed"
Jun 17 00:55:27.099: INFO: Pod "pod-subpath-test-dynamicpv-gmmq": Phase="Pending", Reason="", readiness=false. Elapsed: 143.781202ms
Jun 17 00:55:29.244: INFO: Pod "pod-subpath-test-dynamicpv-gmmq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288700595s
Jun 17 00:55:31.390: INFO: Pod "pod-subpath-test-dynamicpv-gmmq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434598649s
Jun 17 00:55:33.536: INFO: Pod "pod-subpath-test-dynamicpv-gmmq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580901273s
Jun 17 00:55:35.681: INFO: Pod "pod-subpath-test-dynamicpv-gmmq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.72555132s
Jun 17 00:55:37.830: INFO: Pod "pod-subpath-test-dynamicpv-gmmq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.87468511s
Jun 17 00:55:39.975: INFO: Pod "pod-subpath-test-dynamicpv-gmmq": Phase="Pending", Reason="", readiness=false. Elapsed: 13.019317688s
Jun 17 00:55:42.122: INFO: Pod "pod-subpath-test-dynamicpv-gmmq": Phase="Pending", Reason="", readiness=false. Elapsed: 15.166174802s
Jun 17 00:55:44.267: INFO: Pod "pod-subpath-test-dynamicpv-gmmq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.31128043s
STEP: Saw pod success
Jun 17 00:55:44.267: INFO: Pod "pod-subpath-test-dynamicpv-gmmq" satisfied condition "Succeeded or Failed"
Jun 17 00:55:44.411: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-gmmq container test-container-subpath-dynamicpv-gmmq: <nil>
STEP: delete the pod
Jun 17 00:55:44.711: INFO: Waiting for pod pod-subpath-test-dynamicpv-gmmq to disappear
Jun 17 00:55:44.854: INFO: Pod pod-subpath-test-dynamicpv-gmmq no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-gmmq
Jun 17 00:55:44.855: INFO: Deleting pod "pod-subpath-test-dynamicpv-gmmq" in namespace "provisioning-3575"
... skipping 55 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":8,"skipped":89,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:09.118: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 231 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:11.331: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 25 lines ...
Jun 17 00:55:51.706: INFO: PersistentVolumeClaim pvc-jbr54 found but phase is Pending instead of Bound.
Jun 17 00:55:53.856: INFO: PersistentVolumeClaim pvc-jbr54 found and phase=Bound (2.293851163s)
Jun 17 00:55:53.856: INFO: Waiting up to 3m0s for PersistentVolume local-l5c72 to have phase Bound
Jun 17 00:55:54.000: INFO: PersistentVolume local-l5c72 found and phase=Bound (143.770164ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hn6t
STEP: Creating a pod to test subpath
Jun 17 00:55:54.431: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hn6t" in namespace "provisioning-8767" to be "Succeeded or Failed"
Jun 17 00:55:54.575: INFO: Pod "pod-subpath-test-preprovisionedpv-hn6t": Phase="Pending", Reason="", readiness=false. Elapsed: 143.442085ms
Jun 17 00:55:56.723: INFO: Pod "pod-subpath-test-preprovisionedpv-hn6t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291177582s
Jun 17 00:55:58.867: INFO: Pod "pod-subpath-test-preprovisionedpv-hn6t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435698013s
Jun 17 00:56:01.013: INFO: Pod "pod-subpath-test-preprovisionedpv-hn6t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581319337s
Jun 17 00:56:03.157: INFO: Pod "pod-subpath-test-preprovisionedpv-hn6t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.725612033s
STEP: Saw pod success
Jun 17 00:56:03.157: INFO: Pod "pod-subpath-test-preprovisionedpv-hn6t" satisfied condition "Succeeded or Failed"
Jun 17 00:56:03.304: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-hn6t container test-container-subpath-preprovisionedpv-hn6t: <nil>
STEP: delete the pod
Jun 17 00:56:03.600: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hn6t to disappear
Jun 17 00:56:03.744: INFO: Pod pod-subpath-test-preprovisionedpv-hn6t no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hn6t
Jun 17 00:56:03.744: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hn6t" in namespace "provisioning-8767"
STEP: Creating pod pod-subpath-test-preprovisionedpv-hn6t
STEP: Creating a pod to test subpath
Jun 17 00:56:04.032: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hn6t" in namespace "provisioning-8767" to be "Succeeded or Failed"
Jun 17 00:56:04.176: INFO: Pod "pod-subpath-test-preprovisionedpv-hn6t": Phase="Pending", Reason="", readiness=false. Elapsed: 143.998776ms
Jun 17 00:56:06.321: INFO: Pod "pod-subpath-test-preprovisionedpv-hn6t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289080589s
STEP: Saw pod success
Jun 17 00:56:06.321: INFO: Pod "pod-subpath-test-preprovisionedpv-hn6t" satisfied condition "Succeeded or Failed"
Jun 17 00:56:06.466: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-hn6t container test-container-subpath-preprovisionedpv-hn6t: <nil>
STEP: delete the pod
Jun 17 00:56:06.761: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hn6t to disappear
Jun 17 00:56:06.905: INFO: Pod pod-subpath-test-preprovisionedpv-hn6t no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hn6t
Jun 17 00:56:06.905: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hn6t" in namespace "provisioning-8767"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:12.137: INFO: Driver nfs doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 84 lines ...
• [SLOW TEST:7.513 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:12.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2545" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":9,"skipped":92,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 125 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, have capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":8,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:16.849: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 152 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":7,"skipped":56,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:55:59.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec through an HTTP proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:436
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":8,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:17.607: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 80 lines ...
Jun 17 00:56:05.733: INFO: PersistentVolumeClaim pvc-dhhqq found but phase is Pending instead of Bound.
Jun 17 00:56:07.879: INFO: PersistentVolumeClaim pvc-dhhqq found and phase=Bound (2.289720109s)
Jun 17 00:56:07.879: INFO: Waiting up to 3m0s for PersistentVolume local-l7nqj to have phase Bound
Jun 17 00:56:08.023: INFO: PersistentVolume local-l7nqj found and phase=Bound (144.430011ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6qwr
STEP: Creating a pod to test subpath
Jun 17 00:56:08.482: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6qwr" in namespace "provisioning-7377" to be "Succeeded or Failed"
Jun 17 00:56:08.630: INFO: Pod "pod-subpath-test-preprovisionedpv-6qwr": Phase="Pending", Reason="", readiness=false. Elapsed: 148.022319ms
Jun 17 00:56:10.778: INFO: Pod "pod-subpath-test-preprovisionedpv-6qwr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296154667s
Jun 17 00:56:12.924: INFO: Pod "pod-subpath-test-preprovisionedpv-6qwr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441810994s
Jun 17 00:56:15.070: INFO: Pod "pod-subpath-test-preprovisionedpv-6qwr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.587659962s
STEP: Saw pod success
Jun 17 00:56:15.070: INFO: Pod "pod-subpath-test-preprovisionedpv-6qwr" satisfied condition "Succeeded or Failed"
Jun 17 00:56:15.214: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-6qwr container test-container-volume-preprovisionedpv-6qwr: <nil>
STEP: delete the pod
Jun 17 00:56:15.517: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6qwr to disappear
Jun 17 00:56:15.665: INFO: Pod pod-subpath-test-preprovisionedpv-6qwr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6qwr
Jun 17 00:56:15.666: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6qwr" in namespace "provisioning-7377"
... skipping 21 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":37,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:17.697: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":6,"skipped":71,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:56:07.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 17 00:56:08.532: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c928167f-23f2-4f2b-9b59-02236a5f15ff" in namespace "security-context-test-9731" to be "Succeeded or Failed"
Jun 17 00:56:08.681: INFO: Pod "alpine-nnp-false-c928167f-23f2-4f2b-9b59-02236a5f15ff": Phase="Pending", Reason="", readiness=false. Elapsed: 148.543751ms
Jun 17 00:56:10.826: INFO: Pod "alpine-nnp-false-c928167f-23f2-4f2b-9b59-02236a5f15ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293636187s
Jun 17 00:56:12.971: INFO: Pod "alpine-nnp-false-c928167f-23f2-4f2b-9b59-02236a5f15ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438609889s
Jun 17 00:56:15.117: INFO: Pod "alpine-nnp-false-c928167f-23f2-4f2b-9b59-02236a5f15ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584494431s
Jun 17 00:56:17.262: INFO: Pod "alpine-nnp-false-c928167f-23f2-4f2b-9b59-02236a5f15ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.730202998s
Jun 17 00:56:17.262: INFO: Pod "alpine-nnp-false-c928167f-23f2-4f2b-9b59-02236a5f15ff" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:17.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9731" for this suite.


... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:17.712: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 92 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jun 17 00:55:53.880: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 17 00:55:53.880: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-pbvl
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 00:55:54.027: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-pbvl" in namespace "provisioning-4099" to be "Succeeded or Failed"
Jun 17 00:55:54.171: INFO: Pod "pod-subpath-test-inlinevolume-pbvl": Phase="Pending", Reason="", readiness=false. Elapsed: 144.003087ms
Jun 17 00:55:56.315: INFO: Pod "pod-subpath-test-inlinevolume-pbvl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288448813s
Jun 17 00:55:58.461: INFO: Pod "pod-subpath-test-inlinevolume-pbvl": Phase="Running", Reason="", readiness=true. Elapsed: 4.43392703s
Jun 17 00:56:00.606: INFO: Pod "pod-subpath-test-inlinevolume-pbvl": Phase="Running", Reason="", readiness=true. Elapsed: 6.578835311s
Jun 17 00:56:02.750: INFO: Pod "pod-subpath-test-inlinevolume-pbvl": Phase="Running", Reason="", readiness=true. Elapsed: 8.723173015s
Jun 17 00:56:04.895: INFO: Pod "pod-subpath-test-inlinevolume-pbvl": Phase="Running", Reason="", readiness=true. Elapsed: 10.868027228s
Jun 17 00:56:07.046: INFO: Pod "pod-subpath-test-inlinevolume-pbvl": Phase="Running", Reason="", readiness=true. Elapsed: 13.019316544s
Jun 17 00:56:09.192: INFO: Pod "pod-subpath-test-inlinevolume-pbvl": Phase="Running", Reason="", readiness=true. Elapsed: 15.165231777s
Jun 17 00:56:11.337: INFO: Pod "pod-subpath-test-inlinevolume-pbvl": Phase="Running", Reason="", readiness=true. Elapsed: 17.309817839s
Jun 17 00:56:13.482: INFO: Pod "pod-subpath-test-inlinevolume-pbvl": Phase="Running", Reason="", readiness=true. Elapsed: 19.455060856s
Jun 17 00:56:15.627: INFO: Pod "pod-subpath-test-inlinevolume-pbvl": Phase="Running", Reason="", readiness=true. Elapsed: 21.600197296s
Jun 17 00:56:17.772: INFO: Pod "pod-subpath-test-inlinevolume-pbvl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.745429724s
STEP: Saw pod success
Jun 17 00:56:17.773: INFO: Pod "pod-subpath-test-inlinevolume-pbvl" satisfied condition "Succeeded or Failed"
Jun 17 00:56:17.916: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-pbvl container test-container-subpath-inlinevolume-pbvl: <nil>
STEP: delete the pod
Jun 17 00:56:18.221: INFO: Waiting for pod pod-subpath-test-inlinevolume-pbvl to disappear
Jun 17 00:56:18.365: INFO: Pod pod-subpath-test-inlinevolume-pbvl no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-pbvl
Jun 17 00:56:18.365: INFO: Deleting pod "pod-subpath-test-inlinevolume-pbvl" in namespace "provisioning-4099"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":61,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun 17 00:56:14.469: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2450a255-8f88-41e5-951e-f362aab66ed9"
Jun 17 00:56:14.469: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2450a255-8f88-41e5-951e-f362aab66ed9" in namespace "pods-5913" to be "terminated due to deadline exceeded"
Jun 17 00:56:14.614: INFO: Pod "pod-update-activedeadlineseconds-2450a255-8f88-41e5-951e-f362aab66ed9": Phase="Running", Reason="", readiness=true. Elapsed: 144.366199ms
Jun 17 00:56:16.759: INFO: Pod "pod-update-activedeadlineseconds-2450a255-8f88-41e5-951e-f362aab66ed9": Phase="Running", Reason="", readiness=true. Elapsed: 2.289614954s
Jun 17 00:56:18.905: INFO: Pod "pod-update-activedeadlineseconds-2450a255-8f88-41e5-951e-f362aab66ed9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.435438169s
Jun 17 00:56:18.905: INFO: Pod "pod-update-activedeadlineseconds-2450a255-8f88-41e5-951e-f362aab66ed9" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:18.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5913" for this suite.


• [SLOW TEST:8.972 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":48,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:19.229: INFO: Driver local doesn't support ext4 -- skipping
... skipping 33 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:55:56.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
• [SLOW TEST:25.135 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:21.178: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
Jun 17 00:55:37.514: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-8zsh4] to have phase Bound
Jun 17 00:55:37.658: INFO: PersistentVolumeClaim pvc-8zsh4 found and phase=Bound (143.943166ms)
STEP: Deleting the previously created pod
Jun 17 00:55:52.382: INFO: Deleting pod "pvc-volume-tester-mp6fx" in namespace "csi-mock-volumes-8076"
Jun 17 00:55:52.527: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mp6fx" to be fully deleted
STEP: Checking CSI driver logs
Jun 17 00:55:58.970: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7722da4a-93b1-4105-ac0b-042e80578db6/volumes/kubernetes.io~csi/pvc-d2965bbf-cc92-469a-886f-1f7cdfdb19b4/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-mp6fx
Jun 17 00:55:58.970: INFO: Deleting pod "pvc-volume-tester-mp6fx" in namespace "csi-mock-volumes-8076"
STEP: Deleting claim pvc-8zsh4
Jun 17 00:55:59.402: INFO: Waiting up to 2m0s for PersistentVolume pvc-d2965bbf-cc92-469a-886f-1f7cdfdb19b4 to get deleted
Jun 17 00:55:59.550: INFO: PersistentVolume pvc-d2965bbf-cc92-469a-886f-1f7cdfdb19b4 found and phase=Released (147.605793ms)
Jun 17 00:56:01.695: INFO: PersistentVolume pvc-d2965bbf-cc92-469a-886f-1f7cdfdb19b4 found and phase=Released (2.292287034s)
... skipping 46 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:21.484: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Jun 17 00:56:17.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 17 00:56:18.668: INFO: Waiting up to 5m0s for pod "security-context-1b0e418d-c863-4c77-96fa-cbea9d6d438c" in namespace "security-context-4319" to be "Succeeded or Failed"
Jun 17 00:56:18.812: INFO: Pod "security-context-1b0e418d-c863-4c77-96fa-cbea9d6d438c": Phase="Pending", Reason="", readiness=false. Elapsed: 143.976736ms
Jun 17 00:56:20.960: INFO: Pod "security-context-1b0e418d-c863-4c77-96fa-cbea9d6d438c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.292313984s
STEP: Saw pod success
Jun 17 00:56:20.961: INFO: Pod "security-context-1b0e418d-c863-4c77-96fa-cbea9d6d438c" satisfied condition "Succeeded or Failed"
Jun 17 00:56:21.108: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod security-context-1b0e418d-c863-4c77-96fa-cbea9d6d438c container test-container: <nil>
STEP: delete the pod
Jun 17 00:56:21.403: INFO: Waiting for pod security-context-1b0e418d-c863-4c77-96fa-cbea9d6d438c to disappear
Jun 17 00:56:21.547: INFO: Pod security-context-1b0e418d-c863-4c77-96fa-cbea9d6d438c no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:21.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4319" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":86,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:21.856: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-b5b560e8-33a6-4dab-833c-bef37f771ca3
STEP: Creating a pod to test consume secrets
Jun 17 00:56:18.721: INFO: Waiting up to 5m0s for pod "pod-secrets-64f13ff9-6d7a-4325-b130-3767f24ba1c0" in namespace "secrets-2570" to be "Succeeded or Failed"
Jun 17 00:56:18.865: INFO: Pod "pod-secrets-64f13ff9-6d7a-4325-b130-3767f24ba1c0": Phase="Pending", Reason="", readiness=false. Elapsed: 144.333692ms
Jun 17 00:56:21.011: INFO: Pod "pod-secrets-64f13ff9-6d7a-4325-b130-3767f24ba1c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.290469118s
STEP: Saw pod success
Jun 17 00:56:21.012: INFO: Pod "pod-secrets-64f13ff9-6d7a-4325-b130-3767f24ba1c0" satisfied condition "Succeeded or Failed"
Jun 17 00:56:21.156: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-secrets-64f13ff9-6d7a-4325-b130-3767f24ba1c0 container secret-volume-test: <nil>
STEP: delete the pod
Jun 17 00:56:21.456: INFO: Waiting for pod pod-secrets-64f13ff9-6d7a-4325-b130-3767f24ba1c0 to disappear
Jun 17 00:56:21.601: INFO: Pod pod-secrets-64f13ff9-6d7a-4325-b130-3767f24ba1c0 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:21.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2570" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:21.905: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
      Driver "nfs" does not support FsGroup - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":75,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:55:56.430: INFO: >>> kubeConfig: /root/.kube/config
... skipping 13 lines ...
Jun 17 00:56:07.277: INFO: PersistentVolumeClaim pvc-2gccw found but phase is Pending instead of Bound.
Jun 17 00:56:09.425: INFO: PersistentVolumeClaim pvc-2gccw found and phase=Bound (4.440801158s)
Jun 17 00:56:09.425: INFO: Waiting up to 3m0s for PersistentVolume local-kjptg to have phase Bound
Jun 17 00:56:09.571: INFO: PersistentVolume local-kjptg found and phase=Bound (145.2727ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rs5k
STEP: Creating a pod to test subpath
Jun 17 00:56:10.008: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rs5k" in namespace "provisioning-5551" to be "Succeeded or Failed"
Jun 17 00:56:10.152: INFO: Pod "pod-subpath-test-preprovisionedpv-rs5k": Phase="Pending", Reason="", readiness=false. Elapsed: 144.638986ms
Jun 17 00:56:12.297: INFO: Pod "pod-subpath-test-preprovisionedpv-rs5k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289215884s
Jun 17 00:56:14.442: INFO: Pod "pod-subpath-test-preprovisionedpv-rs5k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434212606s
Jun 17 00:56:16.588: INFO: Pod "pod-subpath-test-preprovisionedpv-rs5k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.580284233s
STEP: Saw pod success
Jun 17 00:56:16.588: INFO: Pod "pod-subpath-test-preprovisionedpv-rs5k" satisfied condition "Succeeded or Failed"
Jun 17 00:56:16.732: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-rs5k container test-container-subpath-preprovisionedpv-rs5k: <nil>
STEP: delete the pod
Jun 17 00:56:17.035: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rs5k to disappear
Jun 17 00:56:17.180: INFO: Pod pod-subpath-test-preprovisionedpv-rs5k no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rs5k
Jun 17 00:56:17.180: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rs5k" in namespace "provisioning-5551"
STEP: Creating pod pod-subpath-test-preprovisionedpv-rs5k
STEP: Creating a pod to test subpath
Jun 17 00:56:17.471: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rs5k" in namespace "provisioning-5551" to be "Succeeded or Failed"
Jun 17 00:56:17.615: INFO: Pod "pod-subpath-test-preprovisionedpv-rs5k": Phase="Pending", Reason="", readiness=false. Elapsed: 144.123767ms
Jun 17 00:56:19.760: INFO: Pod "pod-subpath-test-preprovisionedpv-rs5k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289059682s
STEP: Saw pod success
Jun 17 00:56:19.760: INFO: Pod "pod-subpath-test-preprovisionedpv-rs5k" satisfied condition "Succeeded or Failed"
Jun 17 00:56:19.904: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-rs5k container test-container-subpath-preprovisionedpv-rs5k: <nil>
STEP: delete the pod
Jun 17 00:56:20.214: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rs5k to disappear
Jun 17 00:56:20.358: INFO: Pod pod-subpath-test-preprovisionedpv-rs5k no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rs5k
Jun 17 00:56:20.358: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rs5k" in namespace "provisioning-5551"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":11,"skipped":75,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:22.387: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:22.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-8062" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":5,"skipped":15,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:52:24.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 222 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should provide basic identity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:126
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","total":-1,"completed":3,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":6,"skipped":25,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:23.074: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 47 lines ...
Jun 17 00:55:51.277: INFO: PersistentVolumeClaim pvc-v6qzb found but phase is Pending instead of Bound.
Jun 17 00:55:53.422: INFO: PersistentVolumeClaim pvc-v6qzb found and phase=Bound (15.15865522s)
Jun 17 00:55:53.422: INFO: Waiting up to 3m0s for PersistentVolume local-g7z94 to have phase Bound
Jun 17 00:55:53.578: INFO: PersistentVolume local-g7z94 found and phase=Bound (155.548205ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9257
STEP: Creating a pod to test atomic-volume-subpath
Jun 17 00:55:54.010: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9257" in namespace "provisioning-477" to be "Succeeded or Failed"
Jun 17 00:55:54.154: INFO: Pod "pod-subpath-test-preprovisionedpv-9257": Phase="Pending", Reason="", readiness=false. Elapsed: 143.54876ms
Jun 17 00:55:56.298: INFO: Pod "pod-subpath-test-preprovisionedpv-9257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287781446s
Jun 17 00:55:58.443: INFO: Pod "pod-subpath-test-preprovisionedpv-9257": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432210399s
Jun 17 00:56:00.588: INFO: Pod "pod-subpath-test-preprovisionedpv-9257": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577163119s
Jun 17 00:56:02.731: INFO: Pod "pod-subpath-test-preprovisionedpv-9257": Phase="Running", Reason="", readiness=true. Elapsed: 8.72092337s
Jun 17 00:56:04.876: INFO: Pod "pod-subpath-test-preprovisionedpv-9257": Phase="Running", Reason="", readiness=true. Elapsed: 10.865876855s
Jun 17 00:56:07.021: INFO: Pod "pod-subpath-test-preprovisionedpv-9257": Phase="Running", Reason="", readiness=true. Elapsed: 13.010012326s
Jun 17 00:56:09.166: INFO: Pod "pod-subpath-test-preprovisionedpv-9257": Phase="Running", Reason="", readiness=true. Elapsed: 15.155846642s
Jun 17 00:56:11.311: INFO: Pod "pod-subpath-test-preprovisionedpv-9257": Phase="Running", Reason="", readiness=true. Elapsed: 17.30011858s
Jun 17 00:56:13.459: INFO: Pod "pod-subpath-test-preprovisionedpv-9257": Phase="Running", Reason="", readiness=true. Elapsed: 19.448143842s
Jun 17 00:56:15.604: INFO: Pod "pod-subpath-test-preprovisionedpv-9257": Phase="Running", Reason="", readiness=true. Elapsed: 21.593405729s
Jun 17 00:56:17.749: INFO: Pod "pod-subpath-test-preprovisionedpv-9257": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.738037462s
STEP: Saw pod success
Jun 17 00:56:17.749: INFO: Pod "pod-subpath-test-preprovisionedpv-9257" satisfied condition "Succeeded or Failed"
Jun 17 00:56:17.892: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-9257 container test-container-subpath-preprovisionedpv-9257: <nil>
STEP: delete the pod
Jun 17 00:56:18.186: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9257 to disappear
Jun 17 00:56:18.330: INFO: Pod pod-subpath-test-preprovisionedpv-9257 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9257
Jun 17 00:56:18.330: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9257" in namespace "provisioning-477"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":9,"skipped":64,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:6.849 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":103,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 28 lines ...
Jun 17 00:55:51.029: INFO: PersistentVolumeClaim pvc-s4pds found but phase is Pending instead of Bound.
Jun 17 00:55:53.179: INFO: PersistentVolumeClaim pvc-s4pds found and phase=Bound (15.159351365s)
Jun 17 00:55:53.179: INFO: Waiting up to 3m0s for PersistentVolume nfs-9ttdg to have phase Bound
Jun 17 00:55:53.325: INFO: PersistentVolume nfs-9ttdg found and phase=Bound (145.540497ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-6wgg
STEP: Creating a pod to test exec-volume-test
Jun 17 00:55:53.768: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-6wgg" in namespace "volume-5249" to be "Succeeded or Failed"
Jun 17 00:55:53.912: INFO: Pod "exec-volume-test-preprovisionedpv-6wgg": Phase="Pending", Reason="", readiness=false. Elapsed: 143.359326ms
Jun 17 00:55:56.058: INFO: Pod "exec-volume-test-preprovisionedpv-6wgg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289547382s
Jun 17 00:55:58.201: INFO: Pod "exec-volume-test-preprovisionedpv-6wgg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432881686s
STEP: Saw pod success
Jun 17 00:55:58.201: INFO: Pod "exec-volume-test-preprovisionedpv-6wgg" satisfied condition "Succeeded or Failed"
Jun 17 00:55:58.344: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-6wgg container exec-container-preprovisionedpv-6wgg: <nil>
STEP: delete the pod
Jun 17 00:55:58.639: INFO: Waiting for pod exec-volume-test-preprovisionedpv-6wgg to disappear
Jun 17 00:55:58.782: INFO: Pod exec-volume-test-preprovisionedpv-6wgg no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-6wgg
Jun 17 00:55:58.782: INFO: Deleting pod "exec-volume-test-preprovisionedpv-6wgg" in namespace "volume-5249"
... skipping 28 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-3169b958-3848-4ac3-abb2-57a6c5567de3
STEP: Creating a pod to test consume configMaps
Jun 17 00:56:20.271: INFO: Waiting up to 5m0s for pod "pod-configmaps-e3e3e4f8-6f2b-492c-b1b4-e97c3ef3a1a2" in namespace "configmap-2549" to be "Succeeded or Failed"
Jun 17 00:56:20.416: INFO: Pod "pod-configmaps-e3e3e4f8-6f2b-492c-b1b4-e97c3ef3a1a2": Phase="Pending", Reason="", readiness=false. Elapsed: 145.159836ms
Jun 17 00:56:22.563: INFO: Pod "pod-configmaps-e3e3e4f8-6f2b-492c-b1b4-e97c3ef3a1a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291766883s
Jun 17 00:56:24.715: INFO: Pod "pod-configmaps-e3e3e4f8-6f2b-492c-b1b4-e97c3ef3a1a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.444143961s
STEP: Saw pod success
Jun 17 00:56:24.716: INFO: Pod "pod-configmaps-e3e3e4f8-6f2b-492c-b1b4-e97c3ef3a1a2" satisfied condition "Succeeded or Failed"
Jun 17 00:56:24.864: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-configmaps-e3e3e4f8-6f2b-492c-b1b4-e97c3ef3a1a2 container agnhost-container: <nil>
STEP: delete the pod
Jun 17 00:56:25.163: INFO: Waiting for pod pod-configmaps-e3e3e4f8-6f2b-492c-b1b4-e97c3ef3a1a2 to disappear
Jun 17 00:56:25.307: INFO: Pod pod-configmaps-e3e3e4f8-6f2b-492c-b1b4-e97c3ef3a1a2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.353 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:25.614: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 58 lines ...
Jun 17 00:54:39.914: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4182
Jun 17 00:54:40.078: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4182
Jun 17 00:54:40.223: INFO: creating *v1.StatefulSet: csi-mock-volumes-4182-4345/csi-mockplugin
Jun 17 00:54:40.368: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4182
Jun 17 00:54:40.512: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4182"
Jun 17 00:54:40.655: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4182 to register on node ip-172-20-60-41.sa-east-1.compute.internal
I0617 00:54:51.034620    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0617 00:54:51.178865    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4182","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0617 00:54:51.323529    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0617 00:54:51.467789    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0617 00:54:51.791182    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4182","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0617 00:54:52.560829    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4182"},"Error":"","FullError":null}
STEP: Creating pod
Jun 17 00:54:57.925: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 17 00:54:58.073: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-rnsw6] to have phase Bound
I0617 00:54:58.083229    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-bc026503-75bd-48ee-a483-c769b8e6370e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
Jun 17 00:54:58.217: INFO: PersistentVolumeClaim pvc-rnsw6 found but phase is Pending instead of Bound.
I0617 00:54:58.232938    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-bc026503-75bd-48ee-a483-c769b8e6370e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-bc026503-75bd-48ee-a483-c769b8e6370e"}}},"Error":"","FullError":null}
Jun 17 00:55:00.362: INFO: PersistentVolumeClaim pvc-rnsw6 found and phase=Bound (2.288646777s)
I0617 00:55:01.863471    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 17 00:55:02.013: INFO: >>> kubeConfig: /root/.kube/config
I0617 00:55:02.965617    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bc026503-75bd-48ee-a483-c769b8e6370e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-bc026503-75bd-48ee-a483-c769b8e6370e","storage.kubernetes.io/csiProvisionerIdentity":"1623891291558-8081-csi-mock-csi-mock-volumes-4182"}},"Response":{},"Error":"","FullError":null}
I0617 00:55:03.519769    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jun 17 00:55:03.665: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 00:55:04.620: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 00:55:05.585: INFO: >>> kubeConfig: /root/.kube/config
I0617 00:55:06.531730    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bc026503-75bd-48ee-a483-c769b8e6370e/globalmount","target_path":"/var/lib/kubelet/pods/69587423-067a-4450-891e-af3c80fa0664/volumes/kubernetes.io~csi/pvc-bc026503-75bd-48ee-a483-c769b8e6370e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-bc026503-75bd-48ee-a483-c769b8e6370e","storage.kubernetes.io/csiProvisionerIdentity":"1623891291558-8081-csi-mock-csi-mock-volumes-4182"}},"Response":{},"Error":"","FullError":null}
Jun 17 00:55:19.089: INFO: Deleting pod "pvc-volume-tester-9rgj4" in namespace "csi-mock-volumes-4182"
Jun 17 00:55:19.234: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9rgj4" to be fully deleted
Jun 17 00:55:19.706: INFO: >>> kubeConfig: /root/.kube/config
I0617 00:55:20.663490    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/69587423-067a-4450-891e-af3c80fa0664/volumes/kubernetes.io~csi/pvc-bc026503-75bd-48ee-a483-c769b8e6370e/mount"},"Response":{},"Error":"","FullError":null}
I0617 00:55:20.824592    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0617 00:55:20.968825    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bc026503-75bd-48ee-a483-c769b8e6370e/globalmount"},"Response":{},"Error":"","FullError":null}
I0617 00:55:29.688176    4910 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jun 17 00:55:30.670: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-rnsw6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4182", SelfLink:"", UID:"bc026503-75bd-48ee-a483-c769b8e6370e", ResourceVersion:"7904", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63759488098, loc:(*time.Location)(0x9dde5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002d06150), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002d06168)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001e71ab0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001e71ac0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jun 17 00:55:30.670: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-rnsw6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4182", SelfLink:"", UID:"bc026503-75bd-48ee-a483-c769b8e6370e", ResourceVersion:"7905", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63759488098, loc:(*time.Location)(0x9dde5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4182"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002e183f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e18408)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002e18420), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e18438)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001c91320), VolumeMode:(*v1.PersistentVolumeMode)(0xc001c91330), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jun 17 00:55:30.671: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-rnsw6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4182", SelfLink:"", UID:"bc026503-75bd-48ee-a483-c769b8e6370e", ResourceVersion:"7914", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63759488098, loc:(*time.Location)(0x9dde5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4182"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030529d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030529f0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003052a08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003052a20)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-bc026503-75bd-48ee-a483-c769b8e6370e", StorageClassName:(*string)(0xc0030288f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003028900), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jun 17 00:55:30.671: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-rnsw6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4182", SelfLink:"", UID:"bc026503-75bd-48ee-a483-c769b8e6370e", ResourceVersion:"7915", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63759488098, loc:(*time.Location)(0x9dde5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4182"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003052a50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003052a68)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003052a80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003052a98)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-bc026503-75bd-48ee-a483-c769b8e6370e", StorageClassName:(*string)(0xc003028930), VolumeMode:(*v1.PersistentVolumeMode)(0xc003028940), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jun 17 00:55:30.671: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-rnsw6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4182", SelfLink:"", UID:"bc026503-75bd-48ee-a483-c769b8e6370e", ResourceVersion:"9124", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63759488098, loc:(*time.Location)(0x9dde5a0)}}, DeletionTimestamp:(*v1.Time)(0xc003052ac8), DeletionGracePeriodSeconds:(*int64)(0xc00318ae48), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4182"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003052ae0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003052af8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003052b10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003052b28)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-bc026503-75bd-48ee-a483-c769b8e6370e", StorageClassName:(*string)(0xc0030289c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0030289d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, immediate binding
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
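The three MODIFIED events dumped above trace the claim's lifecycle: the bind annotations appear, Status.Phase flips from Pending to Bound, and finally a DeletionTimestamp plus the kubernetes.io/pvc-protection finalizer show teardown in progress. A minimal client-go sketch of the same watch the e2e harness runs — kubeconfig path, namespace, and claim name are taken from the log above; the rest is illustrative:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported by the e2e framework in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch a single claim by name, as the mock-volume test does for pvc-rnsw6.
	w, err := client.CoreV1().PersistentVolumeClaims("csi-mock-volumes-4182").
		Watch(context.TODO(), metav1.ListOptions{FieldSelector: "metadata.name=pvc-rnsw6"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		pvc, ok := ev.Object.(*corev1.PersistentVolumeClaim)
		if !ok {
			continue
		}
		// Each "PVC event MODIFIED" line in the log corresponds to one event here.
		fmt.Printf("PVC event %s: phase=%s rv=%s deleting=%v\n",
			ev.Type, pvc.Status.Phase, pvc.ResourceVersion, pvc.DeletionTimestamp != nil)
	}
}
```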
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":57,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:56:24.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:27.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8215" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":57,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:27.501: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 21 lines ...
Jun 17 00:56:23.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 17 00:56:24.398: INFO: Waiting up to 5m0s for pod "pod-bb7114a6-e8a1-46d4-b380-f0340118fd48" in namespace "emptydir-7351" to be "Succeeded or Failed"
Jun 17 00:56:24.546: INFO: Pod "pod-bb7114a6-e8a1-46d4-b380-f0340118fd48": Phase="Pending", Reason="", readiness=false. Elapsed: 148.261474ms
Jun 17 00:56:26.690: INFO: Pod "pod-bb7114a6-e8a1-46d4-b380-f0340118fd48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.292453209s
STEP: Saw pod success
Jun 17 00:56:26.690: INFO: Pod "pod-bb7114a6-e8a1-46d4-b380-f0340118fd48" satisfied condition "Succeeded or Failed"
Jun 17 00:56:26.834: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod pod-bb7114a6-e8a1-46d4-b380-f0340118fd48 container test-container: <nil>
STEP: delete the pod
Jun 17 00:56:27.134: INFO: Waiting for pod pod-bb7114a6-e8a1-46d4-b380-f0340118fd48 to disappear
Jun 17 00:56:27.277: INFO: Pod pod-bb7114a6-e8a1-46d4-b380-f0340118fd48 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:27.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7351" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":67,"failed":0}

S
------------------------------
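The emptydir test above builds a pod whose container runs as a non-root user and exercises a 0666 file on an emptyDir backed by the node's default medium. A sketch of that pod shape using k8s.io/api types — image, UID, and command are illustrative stand-ins, not the test's exact values:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// "non-root,0666,default": an emptyDir on the node's default medium,
	// exercised by a non-root container. Image/UID/command are illustrative.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox:1.36",
				Command:         []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(1001)},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Empty medium string means the node's default medium.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```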
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:27.584: INFO: Only supported for providers [gce gke] (not aws)
... skipping 64 lines ...
• [SLOW TEST:13.560 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":10,"skipped":96,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:27.840: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 64 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jun 17 00:56:23.565: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d111f36-7197-4293-92ec-3bc41c36f172" in namespace "downward-api-1842" to be "Succeeded or Failed"
Jun 17 00:56:23.710: INFO: Pod "downwardapi-volume-3d111f36-7197-4293-92ec-3bc41c36f172": Phase="Pending", Reason="", readiness=false. Elapsed: 144.074681ms
Jun 17 00:56:25.854: INFO: Pod "downwardapi-volume-3d111f36-7197-4293-92ec-3bc41c36f172": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288333047s
Jun 17 00:56:27.998: INFO: Pod "downwardapi-volume-3d111f36-7197-4293-92ec-3bc41c36f172": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432767936s
STEP: Saw pod success
Jun 17 00:56:27.998: INFO: Pod "downwardapi-volume-3d111f36-7197-4293-92ec-3bc41c36f172" satisfied condition "Succeeded or Failed"
Jun 17 00:56:28.152: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod downwardapi-volume-3d111f36-7197-4293-92ec-3bc41c36f172 container client-container: <nil>
STEP: delete the pod
Jun 17 00:56:28.467: INFO: Waiting for pod downwardapi-volume-3d111f36-7197-4293-92ec-3bc41c36f172 to disappear
Jun 17 00:56:28.614: INFO: Pod downwardapi-volume-3d111f36-7197-4293-92ec-3bc41c36f172 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.222 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":16,"failed":0}

S
------------------------------
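The downward API volume test above relies on a defaulting rule: when the container declares no CPU limit, a resourceFieldRef for limits.cpu reports the node's allocatable CPU instead. A sketch of that volume shape — image, paths, and names are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A downward API volume exposing limits.cpu. With no CPU limit on the
	// container, the reported value falls back to node allocatable, which is
	// what the test asserts.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.36",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				// Deliberately no Resources.Limits set.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```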
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":72,"failed":0}

S
------------------------------
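The prestop test above registers an exec hook that the kubelet runs inside the container before stopping it. A sketch of the hook shape; note that recent k8s.io/api names the type LifecycleHandler, while the v1.21 tree under test still calls it Handler (image and command are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A preStop exec hook of the shape the lifecycle-hook test exercises.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-with-prestop-exec-hook-"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox:1.36",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler in the v1.21 API tree.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							// Runs inside the container before SIGTERM is sent.
							Command: []string{"sh", "-c", "echo prestop > /tmp/prestop"},
						},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```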
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:30.551: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":7,"skipped":44,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:56:27.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Jun 17 00:56:28.268: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2285" to be "Succeeded or Failed"
Jun 17 00:56:28.417: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 148.321871ms
Jun 17 00:56:30.561: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.292469919s
Jun 17 00:56:30.561: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:30.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2285" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":8,"skipped":44,"failed":0}

S
------------------------------
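The explicit-nonroot-uid pod above succeeds because the declared UID is non-zero, which satisfies the runAsNonRoot check without any image inspection. A sketch of that security context (UID and image are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func boolPtr(b bool) *bool    { return &b }

func main() {
	// The shape of the "explicit-nonroot-uid" pod: kubelet admits it because
	// the declared UID is non-zero, satisfying RunAsNonRoot.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-nonroot-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "explicit-nonroot-uid",
				Image:   "busybox:1.36",
				Command: []string{"id", "-u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:    int64Ptr(1234),
					RunAsNonRoot: boolPtr(true),
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```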
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 45 lines ...
• [SLOW TEST:25.002 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":8,"skipped":60,"failed":0}

SSSS
------------------------------
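The DNS conformance test above verifies that an in-cluster pod can resolve cluster-internal names through the cluster DNS service. A sketch of a probe pod in the same spirit (image and command are illustrative stand-ins for the test's probe pod):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Resolve the cluster's own API service name via the cluster DNS;
	// the pod succeeds only if in-cluster DNS works.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "dns-test-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "querier",
				Image:   "busybox:1.36",
				Command: []string{"nslookup", "kubernetes.default.svc.cluster.local"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```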
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:248.147 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:33.725: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 40 lines ...
• [SLOW TEST:9.872 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":10,"skipped":104,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:33.787: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 191 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":3,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
Jun 17 00:56:28.621: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 17 00:56:28.621: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-zq4w
STEP: Creating a pod to test subpath
Jun 17 00:56:28.774: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-zq4w" in namespace "provisioning-8765" to be "Succeeded or Failed"
Jun 17 00:56:28.918: INFO: Pod "pod-subpath-test-inlinevolume-zq4w": Phase="Pending", Reason="", readiness=false. Elapsed: 144.087708ms
Jun 17 00:56:31.063: INFO: Pod "pod-subpath-test-inlinevolume-zq4w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288479891s
Jun 17 00:56:33.211: INFO: Pod "pod-subpath-test-inlinevolume-zq4w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436542592s
Jun 17 00:56:35.383: INFO: Pod "pod-subpath-test-inlinevolume-zq4w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.608391451s
STEP: Saw pod success
Jun 17 00:56:35.383: INFO: Pod "pod-subpath-test-inlinevolume-zq4w" satisfied condition "Succeeded or Failed"
Jun 17 00:56:35.569: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-zq4w container test-container-subpath-inlinevolume-zq4w: <nil>
STEP: delete the pod
Jun 17 00:56:35.898: INFO: Waiting for pod pod-subpath-test-inlinevolume-zq4w to disappear
Jun 17 00:56:36.069: INFO: Pod pod-subpath-test-inlinevolume-zq4w no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-zq4w
Jun 17 00:56:36.069: INFO: Deleting pod "pod-subpath-test-inlinevolume-zq4w" in namespace "provisioning-8765"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":11,"skipped":100,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:36.674: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:56:32.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jun 17 00:56:33.452: INFO: Waiting up to 5m0s for pod "downward-api-9b5e50ab-5654-493a-8af3-715db3e92564" in namespace "downward-api-3002" to be "Succeeded or Failed"
Jun 17 00:56:33.596: INFO: Pod "downward-api-9b5e50ab-5654-493a-8af3-715db3e92564": Phase="Pending", Reason="", readiness=false. Elapsed: 144.104465ms
Jun 17 00:56:35.741: INFO: Pod "downward-api-9b5e50ab-5654-493a-8af3-715db3e92564": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.28919697s
STEP: Saw pod success
Jun 17 00:56:35.741: INFO: Pod "downward-api-9b5e50ab-5654-493a-8af3-715db3e92564" satisfied condition "Succeeded or Failed"
Jun 17 00:56:35.887: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod downward-api-9b5e50ab-5654-493a-8af3-715db3e92564 container dapi-container: <nil>
STEP: delete the pod
Jun 17 00:56:36.236: INFO: Waiting for pod downward-api-9b5e50ab-5654-493a-8af3-715db3e92564 to disappear
Jun 17 00:56:36.385: INFO: Pod downward-api-9b5e50ab-5654-493a-8af3-715db3e92564 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:36.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3002" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":13,"failed":0}

SSSSSSSSS
------------------------------
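This is the env-var flavour of the downward API defaulting seen earlier: with no limits declared on the container, resourceFieldRef env vars surface node-allocatable CPU and memory. A sketch (names and image are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No Resources.Limits on the container, so both env vars report
	// node-allocatable values.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downward-api-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.36",
				Command: []string{"sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"},
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```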
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":63,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:38.693: INFO: Only supported for providers [azure] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-0dae40df-3c29-4d4e-b09b-ad033ea24023
STEP: Creating a pod to test consume secrets
Jun 17 00:56:35.384: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9b8462d5-3731-4312-b62e-ccbc9516c7ee" in namespace "projected-6987" to be "Succeeded or Failed"
Jun 17 00:56:35.570: INFO: Pod "pod-projected-secrets-9b8462d5-3731-4312-b62e-ccbc9516c7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 186.256916ms
Jun 17 00:56:37.715: INFO: Pod "pod-projected-secrets-9b8462d5-3731-4312-b62e-ccbc9516c7ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.330762242s
STEP: Saw pod success
Jun 17 00:56:37.715: INFO: Pod "pod-projected-secrets-9b8462d5-3731-4312-b62e-ccbc9516c7ee" satisfied condition "Succeeded or Failed"
Jun 17 00:56:37.869: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-projected-secrets-9b8462d5-3731-4312-b62e-ccbc9516c7ee container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 17 00:56:38.176: INFO: Waiting for pod pod-projected-secrets-9b8462d5-3731-4312-b62e-ccbc9516c7ee to disappear
Jun 17 00:56:38.325: INFO: Pod pod-projected-secrets-9b8462d5-3731-4312-b62e-ccbc9516c7ee no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 5 lines ...
• [SLOW TEST:5.027 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:38.777: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 23 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 97 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:39.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6711" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:39.935: INFO: Only supported for providers [gce gke] (not aws)
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:983
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":7,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:39.982: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 45 lines ...
• [SLOW TEST:6.574 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":11,"skipped":117,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:40.532: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":12,"skipped":86,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:42.760: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 154 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:45.357: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 365 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":13,"skipped":61,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:46.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8942" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":63,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:47.031: INFO: Only supported for providers [gce gke] (not aws)
... skipping 89 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
STEP: Creating configMap with name configmap-test-volume-a60bc8ad-3edf-41bf-86ef-797313c778d4
STEP: Creating a pod to test consume configMaps
Jun 17 00:56:37.789: INFO: Waiting up to 5m0s for pod "pod-configmaps-d945232c-f15b-448f-b0f8-1a3517c7da4f" in namespace "configmap-382" to be "Succeeded or Failed"
Jun 17 00:56:37.933: INFO: Pod "pod-configmaps-d945232c-f15b-448f-b0f8-1a3517c7da4f": Phase="Pending", Reason="", readiness=false. Elapsed: 144.299532ms
Jun 17 00:56:40.079: INFO: Pod "pod-configmaps-d945232c-f15b-448f-b0f8-1a3517c7da4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290624669s
Jun 17 00:56:42.224: INFO: Pod "pod-configmaps-d945232c-f15b-448f-b0f8-1a3517c7da4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435154663s
Jun 17 00:56:44.368: INFO: Pod "pod-configmaps-d945232c-f15b-448f-b0f8-1a3517c7da4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579905888s
Jun 17 00:56:46.520: INFO: Pod "pod-configmaps-d945232c-f15b-448f-b0f8-1a3517c7da4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.731772939s
STEP: Saw pod success
Jun 17 00:56:46.521: INFO: Pod "pod-configmaps-d945232c-f15b-448f-b0f8-1a3517c7da4f" satisfied condition "Succeeded or Failed"
Jun 17 00:56:46.665: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-configmaps-d945232c-f15b-448f-b0f8-1a3517c7da4f container agnhost-container: <nil>
STEP: delete the pod
Jun 17 00:56:46.967: INFO: Waiting for pod pod-configmaps-d945232c-f15b-448f-b0f8-1a3517c7da4f to disappear
Jun 17 00:56:47.111: INFO: Pod pod-configmaps-d945232c-f15b-448f-b0f8-1a3517c7da4f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.663 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":22,"failed":0}

S
------------------------------
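The FSGroup test above works because the pod-level fsGroup causes the kubelet to group-own the configMap files with that GID, letting the non-root user read them. A sketch of that pod shape (UID/GID, mode, and names are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-configmaps-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				// Non-root user plus the FSGroup applied to volume files.
				RunAsUser: int64Ptr(1000),
				FSGroup:   int64Ptr(1001),
			},
			Containers: []corev1.Container{{
				Name:         "agnhost-container",
				Image:        "busybox:1.36",
				Command:      []string{"sh", "-c", "ls -ln /etc/configmap-volume && cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
						// Group-readable so the FSGroup GID can read the files.
						DefaultMode: int32Ptr(0640),
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```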
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:20.980 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":75,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
Jun 17 00:56:44.289: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-334 explain e2e-test-crd-publish-openapi-4309-crds.spec'
Jun 17 00:56:44.978: INFO: stderr: ""
Jun 17 00:56:44.978: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4309-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jun 17 00:56:44.978: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-334 explain e2e-test-crd-publish-openapi-4309-crds.spec.bars'
Jun 17 00:56:45.663: INFO: stderr: ""
Jun 17 00:56:45.663: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4309-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jun 17 00:56:45.663: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-334 explain e2e-test-crd-publish-openapi-4309-crds.spec.bars2'
Jun 17 00:56:46.335: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:56:52.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-334" for this suite.
... skipping 2 lines ...
• [SLOW TEST:24.954 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":11,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:52.612: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 126 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Container restart
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130
    should verify that container can restart successfully after configmaps modified
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131
------------------------------
{"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":5,"skipped":20,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:56.319: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
• [SLOW TEST:29.512 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":7,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:56:58.467: INFO: Driver local doesn't support ext4 -- skipping
... skipping 132 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:01.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6713" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":8,"skipped":22,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":64,"failed":0}
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:56:53.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Jun 17 00:56:54.408: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:01.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4397" for this suite.


• [SLOW TEST:8.179 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":10,"skipped":64,"failed":0}

SS
------------------------------
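The init-container test above exercises a rule of pod startup: with restartPolicy Never, a failing init container is not retried, app containers never start, and the pod goes straight to Failed. A sketch of such a pod (images and commands are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-init-"},
		Spec: corev1.PodSpec{
			// Never retried: one failed init container fails the whole pod.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.36", Command: []string{"sh", "-c", "exit 1"}},
				{Name: "init2", Image: "busybox:1.36", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				// Never runs: init1 fails first and is not restarted.
				Name:    "run1",
				Image:   "busybox:1.36",
				Command: []string{"sleep", "300"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```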
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":7,"skipped":23,"failed":0}
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:57:00.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:02.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-4533" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":8,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:57:02.647: INFO: >>> kubeConfig: /root/.kube/config
... skipping 68 lines ...
STEP: Destroying namespace "apply-2301" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":9,"skipped":23,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":10,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:57:03.793: INFO: >>> kubeConfig: /root/.kube/config
... skipping 166 lines ...
Jun 17 00:56:28.693: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 17 00:56:28.840: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpath69lfs] to have phase Bound
Jun 17 00:56:28.984: INFO: PersistentVolumeClaim csi-hostpath69lfs found but phase is Pending instead of Bound.
Jun 17 00:56:31.130: INFO: PersistentVolumeClaim csi-hostpath69lfs found and phase=Bound (2.289959705s)
STEP: Creating pod pod-subpath-test-dynamicpv-fx2h
STEP: Creating a pod to test subpath
Jun 17 00:56:31.572: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-fx2h" in namespace "provisioning-5901" to be "Succeeded or Failed"
Jun 17 00:56:31.717: INFO: Pod "pod-subpath-test-dynamicpv-fx2h": Phase="Pending", Reason="", readiness=false. Elapsed: 144.75283ms
Jun 17 00:56:33.862: INFO: Pod "pod-subpath-test-dynamicpv-fx2h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289537876s
Jun 17 00:56:36.015: INFO: Pod "pod-subpath-test-dynamicpv-fx2h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.442398922s
Jun 17 00:56:38.161: INFO: Pod "pod-subpath-test-dynamicpv-fx2h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588444563s
Jun 17 00:56:40.305: INFO: Pod "pod-subpath-test-dynamicpv-fx2h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.733114285s
STEP: Saw pod success
Jun 17 00:56:40.306: INFO: Pod "pod-subpath-test-dynamicpv-fx2h" satisfied condition "Succeeded or Failed"
Jun 17 00:56:40.456: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-fx2h container test-container-volume-dynamicpv-fx2h: <nil>
STEP: delete the pod
Jun 17 00:56:40.788: INFO: Waiting for pod pod-subpath-test-dynamicpv-fx2h to disappear
Jun 17 00:56:40.934: INFO: Pod pod-subpath-test-dynamicpv-fx2h no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-fx2h
Jun 17 00:56:40.934: INFO: Deleting pod "pod-subpath-test-dynamicpv-fx2h" in namespace "provisioning-5901"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":101,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:05.182: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
• [SLOW TEST:112.177 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete successful finished jobs with limit of one successful job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:283
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":7,"skipped":54,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:10.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1224" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":8,"skipped":60,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:11.213: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Jun 17 00:57:04.546: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun 17 00:57:04.546: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-cc9t
STEP: Creating a pod to test subpath
Jun 17 00:57:04.695: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-cc9t" in namespace "provisioning-6081" to be "Succeeded or Failed"
Jun 17 00:57:04.839: INFO: Pod "pod-subpath-test-inlinevolume-cc9t": Phase="Pending", Reason="", readiness=false. Elapsed: 143.823645ms
Jun 17 00:57:06.989: INFO: Pod "pod-subpath-test-inlinevolume-cc9t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293441794s
Jun 17 00:57:09.134: INFO: Pod "pod-subpath-test-inlinevolume-cc9t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438252113s
Jun 17 00:57:11.279: INFO: Pod "pod-subpath-test-inlinevolume-cc9t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.583851322s
STEP: Saw pod success
Jun 17 00:57:11.279: INFO: Pod "pod-subpath-test-inlinevolume-cc9t" satisfied condition "Succeeded or Failed"
Jun 17 00:57:11.423: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-cc9t container test-container-volume-inlinevolume-cc9t: <nil>
STEP: delete the pod
Jun 17 00:57:11.722: INFO: Waiting for pod pod-subpath-test-inlinevolume-cc9t to disappear
Jun 17 00:57:11.866: INFO: Pod pod-subpath-test-inlinevolume-cc9t no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-cc9t
Jun 17 00:57:11.866: INFO: Deleting pod "pod-subpath-test-inlinevolume-cc9t" in namespace "provisioning-6081"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:12.473: INFO: Driver hostPath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:55:16.554: INFO: >>> kubeConfig: /root/.kube/config
... skipping 174 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":58,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:12.511: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 152 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 17 00:57:04.559: INFO: Waiting up to 5m0s for pod "pod-eab032d5-cc8e-4acf-b9e6-5c03d59e05d0" in namespace "emptydir-5252" to be "Succeeded or Failed"
Jun 17 00:57:04.707: INFO: Pod "pod-eab032d5-cc8e-4acf-b9e6-5c03d59e05d0": Phase="Pending", Reason="", readiness=false. Elapsed: 148.011618ms
Jun 17 00:57:06.853: INFO: Pod "pod-eab032d5-cc8e-4acf-b9e6-5c03d59e05d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29342118s
Jun 17 00:57:09.001: INFO: Pod "pod-eab032d5-cc8e-4acf-b9e6-5c03d59e05d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441420609s
Jun 17 00:57:11.147: INFO: Pod "pod-eab032d5-cc8e-4acf-b9e6-5c03d59e05d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.587122497s
Jun 17 00:57:13.294: INFO: Pod "pod-eab032d5-cc8e-4acf-b9e6-5c03d59e05d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.734922955s
STEP: Saw pod success
Jun 17 00:57:13.294: INFO: Pod "pod-eab032d5-cc8e-4acf-b9e6-5c03d59e05d0" satisfied condition "Succeeded or Failed"
Jun 17 00:57:13.441: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-eab032d5-cc8e-4acf-b9e6-5c03d59e05d0 container test-container: <nil>
STEP: delete the pod
Jun 17 00:57:13.736: INFO: Waiting for pod pod-eab032d5-cc8e-4acf-b9e6-5c03d59e05d0 to disappear
Jun 17 00:57:13.880: INFO: Pod pod-eab032d5-cc8e-4acf-b9e6-5c03d59e05d0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    files with FSGroup ownership should support (root,0644,tmpfs)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":9,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:14.183: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 100 lines ...
STEP: creating a claim
Jun 17 00:57:02.548: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 17 00:57:02.696: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [nfsdktqq] to have phase Bound
Jun 17 00:57:02.841: INFO: PersistentVolumeClaim nfsdktqq found and phase=Bound (144.258154ms)
STEP: Creating pod pod-subpath-test-dynamicpv-dzkk
STEP: Creating a pod to test subpath
Jun 17 00:57:03.279: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dzkk" in namespace "provisioning-5573" to be "Succeeded or Failed"
Jun 17 00:57:03.423: INFO: Pod "pod-subpath-test-dynamicpv-dzkk": Phase="Pending", Reason="", readiness=false. Elapsed: 144.242108ms
Jun 17 00:57:05.568: INFO: Pod "pod-subpath-test-dynamicpv-dzkk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289291824s
Jun 17 00:57:07.721: INFO: Pod "pod-subpath-test-dynamicpv-dzkk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441696176s
Jun 17 00:57:09.865: INFO: Pod "pod-subpath-test-dynamicpv-dzkk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.586442762s
STEP: Saw pod success
Jun 17 00:57:09.866: INFO: Pod "pod-subpath-test-dynamicpv-dzkk" satisfied condition "Succeeded or Failed"
Jun 17 00:57:10.012: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-dzkk container test-container-volume-dynamicpv-dzkk: <nil>
STEP: delete the pod
Jun 17 00:57:10.347: INFO: Waiting for pod pod-subpath-test-dynamicpv-dzkk to disappear
Jun 17 00:57:10.491: INFO: Pod pod-subpath-test-dynamicpv-dzkk no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-dzkk
Jun 17 00:57:10.492: INFO: Deleting pod "pod-subpath-test-dynamicpv-dzkk" in namespace "provisioning-5573"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":9,"skipped":68,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 23 lines ...
Jun 17 00:56:53.949: INFO: PersistentVolumeClaim pvc-7w4jg found and phase=Bound (8.733391435s)
Jun 17 00:56:53.949: INFO: Waiting up to 3m0s for PersistentVolume nfs-2l7v5 to have phase Bound
Jun 17 00:56:54.093: INFO: PersistentVolume nfs-2l7v5 found and phase=Bound (144.508554ms)
STEP: Checking pod has write access to PersistentVolume
Jun 17 00:56:54.382: INFO: Creating nfs test pod
Jun 17 00:56:54.530: INFO: Pod should terminate with exitcode 0 (success)
Jun 17 00:56:54.530: INFO: Waiting up to 5m0s for pod "pvc-tester-7nhz7" in namespace "pv-7475" to be "Succeeded or Failed"
Jun 17 00:56:54.676: INFO: Pod "pvc-tester-7nhz7": Phase="Pending", Reason="", readiness=false. Elapsed: 145.847379ms
Jun 17 00:56:56.820: INFO: Pod "pvc-tester-7nhz7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290171106s
Jun 17 00:56:58.965: INFO: Pod "pvc-tester-7nhz7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434662509s
Jun 17 00:57:01.110: INFO: Pod "pvc-tester-7nhz7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.579689728s
STEP: Saw pod success
Jun 17 00:57:01.110: INFO: Pod "pvc-tester-7nhz7" satisfied condition "Succeeded or Failed"
Jun 17 00:57:01.110: INFO: Pod pvc-tester-7nhz7 succeeded 
Jun 17 00:57:01.110: INFO: Deleting pod "pvc-tester-7nhz7" in namespace "pv-7475"
Jun 17 00:57:01.259: INFO: Wait up to 5m0s for pod "pvc-tester-7nhz7" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jun 17 00:57:01.403: INFO: Deleting PVC pvc-7w4jg to trigger reclamation of PV 
Jun 17 00:57:01.403: INFO: Deleting PersistentVolumeClaim "pvc-7w4jg"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and non-pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:178
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","total":-1,"completed":4,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":125,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:17.383: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 115 lines ...
Jun 17 00:56:05.275: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun 17 00:56:05.423: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathbqh7l] to have phase Bound
Jun 17 00:56:05.567: INFO: PersistentVolumeClaim csi-hostpathbqh7l found but phase is Pending instead of Bound.
Jun 17 00:56:07.713: INFO: PersistentVolumeClaim csi-hostpathbqh7l found and phase=Bound (2.290339799s)
STEP: Expanding non-expandable pvc
Jun 17 00:56:08.006: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jun 17 00:56:08.304: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:10.595: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:12.594: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:14.595: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:16.596: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:18.595: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:20.595: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:22.606: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:24.612: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:26.595: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:28.604: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:30.596: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:32.595: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:34.596: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:36.596: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:38.600: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jun 17 00:56:38.897: INFO: Error updating pvc csi-hostpathbqh7l: persistentvolumeclaims "csi-hostpathbqh7l" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jun 17 00:56:38.897: INFO: Deleting PersistentVolumeClaim "csi-hostpathbqh7l"
Jun 17 00:56:39.044: INFO: Waiting up to 5m0s for PersistentVolume pvc-012ebbca-6727-47b5-890b-14317184dd26 to get deleted
Jun 17 00:56:39.189: INFO: PersistentVolume pvc-012ebbca-6727-47b5-890b-14317184dd26 was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-1143
... skipping 68 lines ...
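------------------------------
The long run of `Error updating pvc ... is forbidden` retries above is the expected outcome of that test: it deliberately tries to grow a non-expandable claim, and the API server rejects the resize because expansion requires a dynamically provisioned PVC whose StorageClass sets allowVolumeExpansion. A sketch of such a class, under stated assumptions (the class name is made up; hostpath.csi.k8s.io is the CSI hostpath driver exercised elsewhere in this run):

package sketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// expandableClass builds a StorageClass whose dynamically provisioned
// PVCs may later be resized.
func expandableClass() *storagev1.StorageClass {
	allow := true
	return &storagev1.StorageClass{
		ObjectMeta: metav1.ObjectMeta{Name: "expandable-example"},
		// Any CSI driver that implements volume expansion would go here.
		Provisioner:          "hostpath.csi.k8s.io",
		AllowVolumeExpansion: &allow,
	}
}

Resizing itself is then just an update of spec.resources.requests.storage on the claim, which is exactly the update being refused in the retries above.
------------------------------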
• [SLOW TEST:7.184 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":9,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:19.830: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:20.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":10,"skipped":77,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:20.297: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 102 lines ...
• [SLOW TEST:26.838 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":12,"skipped":83,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:20.536: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 18 lines ...
Jun 17 00:57:12.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 17 00:57:13.401: INFO: Waiting up to 5m0s for pod "security-context-af337721-cab4-4ddc-ad28-6c83833cebee" in namespace "security-context-6117" to be "Succeeded or Failed"
Jun 17 00:57:13.545: INFO: Pod "security-context-af337721-cab4-4ddc-ad28-6c83833cebee": Phase="Pending", Reason="", readiness=false. Elapsed: 143.695811ms
Jun 17 00:57:15.705: INFO: Pod "security-context-af337721-cab4-4ddc-ad28-6c83833cebee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303697067s
Jun 17 00:57:17.849: INFO: Pod "security-context-af337721-cab4-4ddc-ad28-6c83833cebee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448027194s
Jun 17 00:57:19.994: INFO: Pod "security-context-af337721-cab4-4ddc-ad28-6c83833cebee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.592647372s
STEP: Saw pod success
Jun 17 00:57:19.994: INFO: Pod "security-context-af337721-cab4-4ddc-ad28-6c83833cebee" satisfied condition "Succeeded or Failed"
Jun 17 00:57:20.138: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod security-context-af337721-cab4-4ddc-ad28-6c83833cebee container test-container: <nil>
STEP: delete the pod
Jun 17 00:57:20.759: INFO: Waiting for pod security-context-af337721-cab4-4ddc-ad28-6c83833cebee to disappear
Jun 17 00:57:20.903: INFO: Pod security-context-af337721-cab4-4ddc-ad28-6c83833cebee no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.668 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":11,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:21.218: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 52 lines ...
• [SLOW TEST:10.972 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":10,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:25.288: INFO: Only supported for providers [gce gke] (not aws)
... skipping 68 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":9,"skipped":44,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:26.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2662" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":11,"skipped":45,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
Jun 17 00:56:43.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Jun 17 00:56:44.692: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 17 00:56:44.991: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-5354" in namespace "volume-5354" to be "Succeeded or Failed"
Jun 17 00:56:45.137: INFO: Pod "hostpath-symlink-prep-volume-5354": Phase="Pending", Reason="", readiness=false. Elapsed: 146.47528ms
Jun 17 00:56:47.364: INFO: Pod "hostpath-symlink-prep-volume-5354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.373569704s
Jun 17 00:56:49.511: INFO: Pod "hostpath-symlink-prep-volume-5354": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.520029594s
STEP: Saw pod success
Jun 17 00:56:49.511: INFO: Pod "hostpath-symlink-prep-volume-5354" satisfied condition "Succeeded or Failed"
Jun 17 00:56:49.511: INFO: Deleting pod "hostpath-symlink-prep-volume-5354" in namespace "volume-5354"
Jun 17 00:56:49.661: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-5354" to be fully deleted
Jun 17 00:56:49.826: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Jun 17 00:56:52.284: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-5354 exec hostpathsymlink-injector --namespace=volume-5354 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-5354' > /opt/0/index.html'
... skipping 38 lines ...
Jun 17 00:57:19.976: INFO: Pod hostpathsymlink-client still exists
Jun 17 00:57:21.830: INFO: Waiting for pod hostpathsymlink-client to disappear
Jun 17 00:57:21.975: INFO: Pod hostpathsymlink-client still exists
Jun 17 00:57:23.832: INFO: Waiting for pod hostpathsymlink-client to disappear
Jun 17 00:57:23.977: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Jun 17 00:57:24.127: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-5354" in namespace "volume-5354" to be "Succeeded or Failed"
Jun 17 00:57:24.272: INFO: Pod "hostpath-symlink-prep-volume-5354": Phase="Pending", Reason="", readiness=false. Elapsed: 144.630014ms
Jun 17 00:57:26.416: INFO: Pod "hostpath-symlink-prep-volume-5354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289294114s
Jun 17 00:57:28.562: INFO: Pod "hostpath-symlink-prep-volume-5354": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.435279374s
STEP: Saw pod success
Jun 17 00:57:28.563: INFO: Pod "hostpath-symlink-prep-volume-5354" satisfied condition "Succeeded or Failed"
Jun 17 00:57:28.563: INFO: Deleting pod "hostpath-symlink-prep-volume-5354" in namespace "volume-5354"
Jun 17 00:57:28.711: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-5354" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:28.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5354" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":13,"skipped":111,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:29.174: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 74 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
Jun 17 00:57:21.973: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 17 00:57:22.118: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-7qdf
STEP: Creating a pod to test subpath
Jun 17 00:57:22.265: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-7qdf" in namespace "provisioning-4322" to be "Succeeded or Failed"
Jun 17 00:57:22.412: INFO: Pod "pod-subpath-test-inlinevolume-7qdf": Phase="Pending", Reason="", readiness=false. Elapsed: 146.869315ms
Jun 17 00:57:24.557: INFO: Pod "pod-subpath-test-inlinevolume-7qdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292022649s
Jun 17 00:57:26.701: INFO: Pod "pod-subpath-test-inlinevolume-7qdf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436450283s
Jun 17 00:57:28.846: INFO: Pod "pod-subpath-test-inlinevolume-7qdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.581425534s
STEP: Saw pod success
Jun 17 00:57:28.846: INFO: Pod "pod-subpath-test-inlinevolume-7qdf" satisfied condition "Succeeded or Failed"
Jun 17 00:57:28.990: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-7qdf container test-container-subpath-inlinevolume-7qdf: <nil>
STEP: delete the pod
Jun 17 00:57:29.293: INFO: Waiting for pod pod-subpath-test-inlinevolume-7qdf to disappear
Jun 17 00:57:29.437: INFO: Pod pod-subpath-test-inlinevolume-7qdf no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-7qdf
Jun 17 00:57:29.437: INFO: Deleting pod "pod-subpath-test-inlinevolume-7qdf" in namespace "provisioning-4322"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":12,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:30.116: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:30.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1430" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":12,"skipped":54,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
Jun 17 00:57:13.496: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [nfsdqpqg] to have phase Bound
Jun 17 00:57:13.640: INFO: PersistentVolumeClaim nfsdqpqg found but phase is Pending instead of Bound.
Jun 17 00:57:15.784: INFO: PersistentVolumeClaim nfsdqpqg found but phase is Pending instead of Bound.
Jun 17 00:57:17.929: INFO: PersistentVolumeClaim nfsdqpqg found and phase=Bound (4.433135455s)
STEP: Creating pod exec-volume-test-dynamicpv-2b8x
STEP: Creating a pod to test exec-volume-test
Jun 17 00:57:18.360: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-2b8x" in namespace "volume-3298" to be "Succeeded or Failed"
Jun 17 00:57:18.510: INFO: Pod "exec-volume-test-dynamicpv-2b8x": Phase="Pending", Reason="", readiness=false. Elapsed: 150.080632ms
Jun 17 00:57:20.660: INFO: Pod "exec-volume-test-dynamicpv-2b8x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300055482s
Jun 17 00:57:22.805: INFO: Pod "exec-volume-test-dynamicpv-2b8x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.444900672s
STEP: Saw pod success
Jun 17 00:57:22.805: INFO: Pod "exec-volume-test-dynamicpv-2b8x" satisfied condition "Succeeded or Failed"
Jun 17 00:57:22.948: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod exec-volume-test-dynamicpv-2b8x container exec-container-dynamicpv-2b8x: <nil>
STEP: delete the pod
Jun 17 00:57:23.295: INFO: Waiting for pod exec-volume-test-dynamicpv-2b8x to disappear
Jun 17 00:57:23.439: INFO: Pod exec-volume-test-dynamicpv-2b8x no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-2b8x
Jun 17 00:57:23.439: INFO: Deleting pod "exec-volume-test-dynamicpv-2b8x" in namespace "volume-3298"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:32.910: INFO: Only supported for providers [vsphere] (not aws)
... skipping 208 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:34.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5486" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":13,"skipped":54,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:34.343: INFO: Only supported for providers [azure] (not aws)
... skipping 67 lines ...
Jun 17 00:57:22.078: INFO: PersistentVolumeClaim pvc-zbgmf found but phase is Pending instead of Bound.
Jun 17 00:57:24.223: INFO: PersistentVolumeClaim pvc-zbgmf found and phase=Bound (8.73162563s)
Jun 17 00:57:24.223: INFO: Waiting up to 3m0s for PersistentVolume local-bl7l6 to have phase Bound
Jun 17 00:57:24.368: INFO: PersistentVolume local-bl7l6 found and phase=Bound (144.68985ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fjd6
STEP: Creating a pod to test subpath
Jun 17 00:57:24.813: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fjd6" in namespace "provisioning-2802" to be "Succeeded or Failed"
Jun 17 00:57:24.959: INFO: Pod "pod-subpath-test-preprovisionedpv-fjd6": Phase="Pending", Reason="", readiness=false. Elapsed: 145.650917ms
Jun 17 00:57:27.105: INFO: Pod "pod-subpath-test-preprovisionedpv-fjd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291458177s
Jun 17 00:57:29.251: INFO: Pod "pod-subpath-test-preprovisionedpv-fjd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437856284s
Jun 17 00:57:31.397: INFO: Pod "pod-subpath-test-preprovisionedpv-fjd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.583949939s
STEP: Saw pod success
Jun 17 00:57:31.397: INFO: Pod "pod-subpath-test-preprovisionedpv-fjd6" satisfied condition "Succeeded or Failed"
Jun 17 00:57:31.542: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-fjd6 container test-container-volume-preprovisionedpv-fjd6: <nil>
STEP: delete the pod
Jun 17 00:57:31.842: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fjd6 to disappear
Jun 17 00:57:31.987: INFO: Pod pod-subpath-test-preprovisionedpv-fjd6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fjd6
Jun 17 00:57:31.987: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fjd6" in namespace "provisioning-2802"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":11,"skipped":66,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:35.925: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-c369e522-083c-49ed-89fe-ec386bf54cd9
STEP: Creating a pod to test consume configMaps
Jun 17 00:57:33.974: INFO: Waiting up to 5m0s for pod "pod-configmaps-0bd6bdcb-9367-437d-8cf2-983473e907e7" in namespace "configmap-8029" to be "Succeeded or Failed"
Jun 17 00:57:34.117: INFO: Pod "pod-configmaps-0bd6bdcb-9367-437d-8cf2-983473e907e7": Phase="Pending", Reason="", readiness=false. Elapsed: 143.163282ms
Jun 17 00:57:36.261: INFO: Pod "pod-configmaps-0bd6bdcb-9367-437d-8cf2-983473e907e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287591792s
STEP: Saw pod success
Jun 17 00:57:36.262: INFO: Pod "pod-configmaps-0bd6bdcb-9367-437d-8cf2-983473e907e7" satisfied condition "Succeeded or Failed"
Jun 17 00:57:36.405: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-configmaps-0bd6bdcb-9367-437d-8cf2-983473e907e7 container agnhost-container: <nil>
STEP: delete the pod
Jun 17 00:57:36.711: INFO: Waiting for pod pod-configmaps-0bd6bdcb-9367-437d-8cf2-983473e907e7 to disappear
Jun 17 00:57:36.855: INFO: Pod pod-configmaps-0bd6bdcb-9367-437d-8cf2-983473e907e7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:36.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8029" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:37.157: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:37.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7925" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":12,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:37.661: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 112 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-0a2d1411-52a6-445b-96b8-dffa070bf85b
STEP: Creating a pod to test consume secrets
Jun 17 00:57:34.608: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-46863360-7c9e-43ba-acc0-5f5b1c65f9cf" in namespace "projected-1107" to be "Succeeded or Failed"
Jun 17 00:57:34.752: INFO: Pod "pod-projected-secrets-46863360-7c9e-43ba-acc0-5f5b1c65f9cf": Phase="Pending", Reason="", readiness=false. Elapsed: 144.388918ms
Jun 17 00:57:36.898: INFO: Pod "pod-projected-secrets-46863360-7c9e-43ba-acc0-5f5b1c65f9cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289849623s
Jun 17 00:57:39.043: INFO: Pod "pod-projected-secrets-46863360-7c9e-43ba-acc0-5f5b1c65f9cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.43529783s
STEP: Saw pod success
Jun 17 00:57:39.043: INFO: Pod "pod-projected-secrets-46863360-7c9e-43ba-acc0-5f5b1c65f9cf" satisfied condition "Succeeded or Failed"
Jun 17 00:57:39.188: INFO: Trying to get logs from node ip-172-20-46-228.sa-east-1.compute.internal pod pod-projected-secrets-46863360-7c9e-43ba-acc0-5f5b1c65f9cf container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 17 00:57:39.886: INFO: Waiting for pod pod-projected-secrets-46863360-7c9e-43ba-acc0-5f5b1c65f9cf to disappear
Jun 17 00:57:40.030: INFO: Pod pod-projected-secrets-46863360-7c9e-43ba-acc0-5f5b1c65f9cf no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.733 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":15,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:57:39.693: INFO: >>> kubeConfig: /root/.kube/config
... skipping 123 lines ...
Jun 17 00:57:23.269: INFO: PersistentVolumeClaim pvc-tvm2z found and phase=Bound (10.867928491s)
Jun 17 00:57:23.269: INFO: Waiting up to 3m0s for PersistentVolume nfs-jms5b to have phase Bound
Jun 17 00:57:23.413: INFO: PersistentVolume nfs-jms5b found and phase=Bound (143.581753ms)
STEP: Checking pod has write access to PersistentVolume
Jun 17 00:57:23.700: INFO: Creating nfs test pod
Jun 17 00:57:23.845: INFO: Pod should terminate with exitcode 0 (success)
Jun 17 00:57:23.845: INFO: Waiting up to 5m0s for pod "pvc-tester-j6m46" in namespace "pv-5736" to be "Succeeded or Failed"
Jun 17 00:57:23.990: INFO: Pod "pvc-tester-j6m46": Phase="Pending", Reason="", readiness=false. Elapsed: 145.367666ms
Jun 17 00:57:26.136: INFO: Pod "pvc-tester-j6m46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290839722s
Jun 17 00:57:28.279: INFO: Pod "pvc-tester-j6m46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434622768s
STEP: Saw pod success
Jun 17 00:57:28.279: INFO: Pod "pvc-tester-j6m46" satisfied condition "Succeeded or Failed"
Jun 17 00:57:28.279: INFO: Pod pvc-tester-j6m46 succeeded 
Jun 17 00:57:28.279: INFO: Deleting pod "pvc-tester-j6m46" in namespace "pv-5736"
Jun 17 00:57:28.425: INFO: Wait up to 5m0s for pod "pvc-tester-j6m46" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jun 17 00:57:28.569: INFO: Deleting PVC pvc-tvm2z to trigger reclamation of PV nfs-jms5b
Jun 17 00:57:28.569: INFO: Deleting PersistentVolumeClaim "pvc-tvm2z"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":5,"skipped":26,"failed":0}
[BeforeEach] [sig-auth] Metadata Concealment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:57:42.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename metadata-concealment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":14,"skipped":132,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:57:32.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":15,"skipped":132,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:44.562: INFO: Only supported for providers [gce gke] (not aws)
... skipping 76 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:44.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7981" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":6,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:44.649: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 41 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79

    Only supported for node OS distro [gci ubuntu custom] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:57:40.180: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jun 17 00:57:40.902: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 17 00:57:41.049: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-b92p
STEP: Creating a pod to test subpath
Jun 17 00:57:41.196: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-b92p" in namespace "provisioning-7051" to be "Succeeded or Failed"
Jun 17 00:57:41.355: INFO: Pod "pod-subpath-test-inlinevolume-b92p": Phase="Pending", Reason="", readiness=false. Elapsed: 159.066725ms
Jun 17 00:57:43.500: INFO: Pod "pod-subpath-test-inlinevolume-b92p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304411643s
Jun 17 00:57:45.649: INFO: Pod "pod-subpath-test-inlinevolume-b92p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.452623806s
STEP: Saw pod success
Jun 17 00:57:45.649: INFO: Pod "pod-subpath-test-inlinevolume-b92p" satisfied condition "Succeeded or Failed"
Jun 17 00:57:45.806: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-b92p container test-container-subpath-inlinevolume-b92p: <nil>
STEP: delete the pod
Jun 17 00:57:46.103: INFO: Waiting for pod pod-subpath-test-inlinevolume-b92p to disappear
Jun 17 00:57:46.248: INFO: Pod pod-subpath-test-inlinevolume-b92p no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-b92p
Jun 17 00:57:46.248: INFO: Deleting pod "pod-subpath-test-inlinevolume-b92p" in namespace "provisioning-7051"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":11,"skipped":73,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:47.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-946" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":7,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:47.469: INFO: Only supported for providers [gce gke] (not aws)
... skipping 47 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Jun 17 00:57:38.588: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-c2a2b416-a3b8-47a4-80e8-15db81d24918" in namespace "security-context-test-7437" to be "Succeeded or Failed"
Jun 17 00:57:38.732: INFO: Pod "alpine-nnp-true-c2a2b416-a3b8-47a4-80e8-15db81d24918": Phase="Pending", Reason="", readiness=false. Elapsed: 144.830754ms
Jun 17 00:57:40.879: INFO: Pod "alpine-nnp-true-c2a2b416-a3b8-47a4-80e8-15db81d24918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290948421s
Jun 17 00:57:43.024: INFO: Pod "alpine-nnp-true-c2a2b416-a3b8-47a4-80e8-15db81d24918": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436260653s
Jun 17 00:57:45.173: INFO: Pod "alpine-nnp-true-c2a2b416-a3b8-47a4-80e8-15db81d24918": Phase="Pending", Reason="", readiness=false. Elapsed: 6.585114692s
Jun 17 00:57:47.318: INFO: Pod "alpine-nnp-true-c2a2b416-a3b8-47a4-80e8-15db81d24918": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.730789798s
Jun 17 00:57:47.319: INFO: Pod "alpine-nnp-true-c2a2b416-a3b8-47a4-80e8-15db81d24918" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:47.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7437" for this suite.


... skipping 76 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":13,"skipped":57,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:48.637: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 27 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:48.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":14,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:49.114: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 87 lines ...
Jun 17 00:57:36.335: INFO: PersistentVolumeClaim pvc-67s4v found but phase is Pending instead of Bound.
Jun 17 00:57:38.480: INFO: PersistentVolumeClaim pvc-67s4v found and phase=Bound (13.016143552s)
Jun 17 00:57:38.480: INFO: Waiting up to 3m0s for PersistentVolume local-wdn6t to have phase Bound
Jun 17 00:57:38.625: INFO: PersistentVolume local-wdn6t found and phase=Bound (144.444821ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5pbf
STEP: Creating a pod to test subpath
Jun 17 00:57:39.062: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5pbf" in namespace "provisioning-9017" to be "Succeeded or Failed"
Jun 17 00:57:39.207: INFO: Pod "pod-subpath-test-preprovisionedpv-5pbf": Phase="Pending", Reason="", readiness=false. Elapsed: 144.738383ms
Jun 17 00:57:41.355: INFO: Pod "pod-subpath-test-preprovisionedpv-5pbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292459321s
Jun 17 00:57:43.501: INFO: Pod "pod-subpath-test-preprovisionedpv-5pbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43836116s
Jun 17 00:57:45.649: INFO: Pod "pod-subpath-test-preprovisionedpv-5pbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.586107925s
STEP: Saw pod success
Jun 17 00:57:45.649: INFO: Pod "pod-subpath-test-preprovisionedpv-5pbf" satisfied condition "Succeeded or Failed"
Jun 17 00:57:45.803: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-5pbf container test-container-volume-preprovisionedpv-5pbf: <nil>
STEP: delete the pod
Jun 17 00:57:46.104: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5pbf to disappear
Jun 17 00:57:46.248: INFO: Pod pod-subpath-test-preprovisionedpv-5pbf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5pbf
Jun 17 00:57:46.248: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5pbf" in namespace "provisioning-9017"
... skipping 24 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":42,"failed":0}

SSSSS
------------------------------
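The 'PersistentVolumeClaim pvc-67s4v found but phase is Pending instead of Bound' lines above are the same poll-until-phase pattern applied to a claim. A minimal client-go sketch (helper name and intervals assumed):

  package e2esketch

  import (
  	"context"
  	"time"

  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/apimachinery/pkg/util/wait"
  	"k8s.io/client-go/kubernetes"
  )

  // waitForPVCBound polls until the claim reaches phase Bound, mirroring
  // the "Waiting up to 3m0s ... to have phase Bound" lines in the log.
  func waitForPVCBound(cs kubernetes.Interface, ns, name string) error {
  	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
  		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
  		if err != nil {
  			return false, err
  		}
  		return pvc.Status.Phase == corev1.ClaimBound, nil
  	})
  }

------------------------------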
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:49.197: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 57 lines ...
• [SLOW TEST:9.029 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:133
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":16,"skipped":95,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:49.928: INFO: Only supported for providers [openstack] (not aws)
... skipping 125 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":8,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:50.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7624" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":17,"skipped":102,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:51.270: INFO: Driver "nfs" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver "nfs" does not support FsGroup - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":13,"skipped":84,"failed":0}
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:57:27.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 28 lines ...
• [SLOW TEST:24.555 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":14,"skipped":84,"failed":0}

SSSSS
------------------------------
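The KubeProxy test above asserts that connections stuck in TCP CLOSE_WAIT get a bounded conntrack timeout, which kube-proxy configures via sysctl on the node. A sketch of reading the effective value from procfs (the e2e test itself works through privileged test pods, not this helper):

  package e2esketch

  import (
  	"os"
  	"strings"
  )

  // closeWaitTimeout reads the conntrack CLOSE_WAIT timeout (seconds)
  // from procfs on the node.
  func closeWaitTimeout() (string, error) {
  	b, err := os.ReadFile("/proc/sys/net/netfilter/nf_conntrack_tcp_timeout_close_wait")
  	if err != nil {
  		return "", err
  	}
  	return strings.TrimSpace(string(b)), nil
  }

------------------------------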
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:52.590: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 56 lines ...
STEP: Destroying namespace "apply-1553" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":9,"skipped":36,"failed":0}

SSSS
------------------------------
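The ServerSideApply test above relies on apply semantics: a field a manager previously applied, but omits from its next apply request, is removed. A minimal client-go sketch of a server-side apply patch (object and fieldManager names invented):

  package e2esketch

  import (
  	"context"

  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/apimachinery/pkg/types"
  	"k8s.io/client-go/kubernetes"
  )

  // applyConfigMap server-side-applies a ConfigMap; reapplying later with
  // a key missing from data would remove that key, since this field
  // manager owns it.
  func applyConfigMap(cs kubernetes.Interface, ns string) error {
  	manifest := []byte(`{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"ssa-demo"},"data":{"keep":"1"}}`)
  	_, err := cs.CoreV1().ConfigMaps(ns).Patch(context.TODO(), "ssa-demo",
  		types.ApplyPatchType, manifest, metav1.PatchOptions{FieldManager: "e2e-sketch"})
  	return err
  }

------------------------------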
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:52.821: INFO: Driver "nfs" does not support FsGroup - skipping
... skipping 84 lines ...
STEP: Destroying namespace "apply-5537" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist","total":-1,"completed":18,"skipped":105,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":12,"skipped":110,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:53.792: INFO: Only supported for providers [azure] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0617 00:52:53.226108    4937 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jun 17 00:57:53.514: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:57:53.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3844" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

SSSSSSSSSSSSSSSSSSSSS
------------------------------
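The Garbage collector test above ('keep the rc around until all its pods are deleted') is foreground cascading deletion: with a Foreground propagation policy, the owner object is only removed after its dependents are gone. A minimal sketch of issuing such a delete (names assumed):

  package e2esketch

  import (
  	"context"

  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/client-go/kubernetes"
  )

  // deleteRCForeground deletes a ReplicationController with foreground
  // propagation, so the RC object lingers until its pods are deleted.
  func deleteRCForeground(cs kubernetes.Interface, ns, name string) error {
  	policy := metav1.DeletePropagationForeground
  	return cs.CoreV1().ReplicationControllers(ns).Delete(context.TODO(), name,
  		metav1.DeleteOptions{PropagationPolicy: &policy})
  }

------------------------------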
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:53.866: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 253 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":9,"skipped":45,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":13,"skipped":131,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":10,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:59.175: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 85 lines ...
• [SLOW TEST:6.810 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":97,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:57:59.480: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:58:02.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9679" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":14,"skipped":135,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 17 00:58:00.103: INFO: Waiting up to 5m0s for pod "pod-bbb2499a-62c9-450e-b8e8-ac64b5223187" in namespace "emptydir-5047" to be "Succeeded or Failed"
Jun 17 00:58:00.246: INFO: Pod "pod-bbb2499a-62c9-450e-b8e8-ac64b5223187": Phase="Pending", Reason="", readiness=false. Elapsed: 143.310643ms
Jun 17 00:58:02.390: INFO: Pod "pod-bbb2499a-62c9-450e-b8e8-ac64b5223187": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287542586s
STEP: Saw pod success
Jun 17 00:58:02.390: INFO: Pod "pod-bbb2499a-62c9-450e-b8e8-ac64b5223187" satisfied condition "Succeeded or Failed"
Jun 17 00:58:02.534: INFO: Trying to get logs from node ip-172-20-60-41.sa-east-1.compute.internal pod pod-bbb2499a-62c9-450e-b8e8-ac64b5223187 container test-container: <nil>
STEP: delete the pod
Jun 17 00:58:02.831: INFO: Waiting for pod pod-bbb2499a-62c9-450e-b8e8-ac64b5223187 to disappear
Jun 17 00:58:02.974: INFO: Pod pod-bbb2499a-62c9-450e-b8e8-ac64b5223187 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:58:02.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5047" for this suite.

•
------------------------------
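The EmptyDir test above ('volume on tmpfs should have the correct mode using FSGroup') combines a memory-backed emptyDir with a pod-level fsGroup. A minimal sketch of the relevant spec fields (gid value and names assumed):

  package e2esketch

  import corev1 "k8s.io/api/core/v1"

  // tmpfsWithFSGroup sketches a pod spec with a memory-medium emptyDir
  // and an fsGroup, the combination the EmptyDir test above checks.
  func tmpfsWithFSGroup() corev1.PodSpec {
  	fsGroup := int64(123)
  	return corev1.PodSpec{
  		SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
  		Volumes: []corev1.Volume{{
  			Name: "test-volume",
  			VolumeSource: corev1.VolumeSource{
  				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
  			},
  		}},
  	}
  }

------------------------------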
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":7,"skipped":24,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:57:19.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
Jun 17 00:57:31.245: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-fb2pm] to have phase Bound
Jun 17 00:57:31.389: INFO: PersistentVolumeClaim pvc-fb2pm found and phase=Bound (144.393547ms)
STEP: Deleting the previously created pod
Jun 17 00:57:38.115: INFO: Deleting pod "pvc-volume-tester-kjgbw" in namespace "csi-mock-volumes-7463"
Jun 17 00:57:38.260: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kjgbw" to be fully deleted
STEP: Checking CSI driver logs
Jun 17 00:57:46.713: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/110919ca-5d9d-483d-8b45-f69811b81dc3/volumes/kubernetes.io~csi/pvc-fea6296d-44f2-4b64-af88-f75cebcc1e3d/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-kjgbw
Jun 17 00:57:46.714: INFO: Deleting pod "pvc-volume-tester-kjgbw" in namespace "csi-mock-volumes-7463"
STEP: Deleting claim pvc-fb2pm
Jun 17 00:57:47.151: INFO: Waiting up to 2m0s for PersistentVolume pvc-fea6296d-44f2-4b64-af88-f75cebcc1e3d to get deleted
Jun 17 00:57:47.295: INFO: PersistentVolume pvc-fea6296d-44f2-4b64-af88-f75cebcc1e3d was removed
STEP: Deleting storageclass csi-mock-volumes-7463-scqccr7
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374
    token should not be plumbed down when csiServiceAccountTokenEnabled=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":8,"skipped":24,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":11,"skipped":49,"failed":0}
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:58:03.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:58:06.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5595" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":49,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 61 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should not deadlock when a pod's predecessor fails
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:250
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":10,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:58:09.805: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 46 lines ...
Jun 17 00:58:07.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Jun 17 00:58:07.974: INFO: Waiting up to 5m0s for pod "security-context-aa44e7f5-4106-489f-8475-958bb353b84c" in namespace "security-context-5149" to be "Succeeded or Failed"
Jun 17 00:58:08.118: INFO: Pod "security-context-aa44e7f5-4106-489f-8475-958bb353b84c": Phase="Pending", Reason="", readiness=false. Elapsed: 143.501346ms
Jun 17 00:58:10.263: INFO: Pod "security-context-aa44e7f5-4106-489f-8475-958bb353b84c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288557087s
STEP: Saw pod success
Jun 17 00:58:10.263: INFO: Pod "security-context-aa44e7f5-4106-489f-8475-958bb353b84c" satisfied condition "Succeeded or Failed"
Jun 17 00:58:10.407: INFO: Trying to get logs from node ip-172-20-55-34.sa-east-1.compute.internal pod security-context-aa44e7f5-4106-489f-8475-958bb353b84c container test-container: <nil>
STEP: delete the pod
Jun 17 00:58:10.700: INFO: Waiting for pod security-context-aa44e7f5-4106-489f-8475-958bb353b84c to disappear
Jun 17 00:58:10.843: INFO: Pod security-context-aa44e7f5-4106-489f-8475-958bb353b84c no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:58:10.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-5149" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":13,"skipped":58,"failed":0}

SS
------------------------------
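pod.Spec.SecurityContext.SupplementalGroups, which the test above exercises, is a plain list of extra GIDs applied to the pod's containers. A minimal sketch (gid value assumed):

  package e2esketch

  import corev1 "k8s.io/api/core/v1"

  // withSupplementalGroups adds extra group IDs at the pod level, as in
  // the Security Context test above.
  func withSupplementalGroups(spec corev1.PodSpec) corev1.PodSpec {
  	spec.SecurityContext = &corev1.PodSecurityContext{
  		SupplementalGroups: []int64{1234},
  	}
  	return spec
  }

------------------------------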
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:58:11.162: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:58:11.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6582" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":11,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:58:11.338: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 70 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47
    should be mountable
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":16,"skipped":137,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":15,"skipped":82,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:58:12.133: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 14 lines ...
      Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":95,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 00:57:40.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 71 lines ...
• [SLOW TEST:21.941 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":19,"skipped":125,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:58:15.865: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
Jun 17 00:58:11.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Jun 17 00:58:12.755: INFO: Waiting up to 5m0s for pod "var-expansion-308ce217-5b89-4a88-9fb9-41999abe04de" in namespace "var-expansion-7183" to be "Succeeded or Failed"
Jun 17 00:58:12.900: INFO: Pod "var-expansion-308ce217-5b89-4a88-9fb9-41999abe04de": Phase="Pending", Reason="", readiness=false. Elapsed: 145.104188ms
Jun 17 00:58:15.045: INFO: Pod "var-expansion-308ce217-5b89-4a88-9fb9-41999abe04de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.290224332s
STEP: Saw pod success
Jun 17 00:58:15.045: INFO: Pod "var-expansion-308ce217-5b89-4a88-9fb9-41999abe04de" satisfied condition "Succeeded or Failed"
Jun 17 00:58:15.190: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod var-expansion-308ce217-5b89-4a88-9fb9-41999abe04de container dapi-container: <nil>
STEP: delete the pod
Jun 17 00:58:15.485: INFO: Waiting for pod var-expansion-308ce217-5b89-4a88-9fb9-41999abe04de to disappear
Jun 17 00:58:15.630: INFO: Pod var-expansion-308ce217-5b89-4a88-9fb9-41999abe04de no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:58:15.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7183" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":17,"skipped":139,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:58:15.934: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 120 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":11,"skipped":89,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:58:17.372: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 392 lines ...
• [SLOW TEST:81.096 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":6,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:58:17.466: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:58:17.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-7563" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":20,"skipped":132,"failed":0}

SS
------------------------------
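The Generated clientset test above drives v1beta1 CronJobs through the typed client. A minimal create call looks like this (name, schedule, and image assumed):

  package e2esketch

  import (
  	"context"

  	batchv1 "k8s.io/api/batch/v1"
  	batchv1beta1 "k8s.io/api/batch/v1beta1"
  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/client-go/kubernetes"
  )

  // createCronJob creates a minimal v1beta1 CronJob via the generated
  // clientset, the API surface the test above covers.
  func createCronJob(cs kubernetes.Interface, ns string) (*batchv1beta1.CronJob, error) {
  	cj := &batchv1beta1.CronJob{
  		ObjectMeta: metav1.ObjectMeta{Name: "test-cronjob"},
  		Spec: batchv1beta1.CronJobSpec{
  			Schedule: "*/1 * * * *",
  			JobTemplate: batchv1beta1.JobTemplateSpec{
  				Spec: batchv1.JobSpec{
  					Template: corev1.PodTemplateSpec{
  						Spec: corev1.PodSpec{
  							RestartPolicy: corev1.RestartPolicyOnFailure,
  							Containers: []corev1.Container{{
  								Name: "c", Image: "busybox:1.34", Command: []string{"true"},
  							}},
  						},
  					},
  				},
  			},
  		},
  	}
  	return cs.BatchV1beta1().CronJobs(ns).Create(context.TODO(), cj, metav1.CreateOptions{})
  }

------------------------------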
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:58:17.965: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 157 lines ...
Jun 17 00:58:16.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 17 00:58:16.936: INFO: Waiting up to 5m0s for pod "pod-daf45014-93a6-49b9-bdd2-94bc0de213dd" in namespace "emptydir-8114" to be "Succeeded or Failed"
Jun 17 00:58:17.080: INFO: Pod "pod-daf45014-93a6-49b9-bdd2-94bc0de213dd": Phase="Pending", Reason="", readiness=false. Elapsed: 144.190247ms
Jun 17 00:58:19.227: INFO: Pod "pod-daf45014-93a6-49b9-bdd2-94bc0de213dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.290699117s
STEP: Saw pod success
Jun 17 00:58:19.227: INFO: Pod "pod-daf45014-93a6-49b9-bdd2-94bc0de213dd" satisfied condition "Succeeded or Failed"
Jun 17 00:58:19.375: INFO: Trying to get logs from node ip-172-20-48-221.sa-east-1.compute.internal pod pod-daf45014-93a6-49b9-bdd2-94bc0de213dd container test-container: <nil>
STEP: delete the pod
Jun 17 00:58:19.671: INFO: Waiting for pod pod-daf45014-93a6-49b9-bdd2-94bc0de213dd to disappear
Jun 17 00:58:19.816: INFO: Pod pod-daf45014-93a6-49b9-bdd2-94bc0de213dd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:58:19.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8114" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":150,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Jun 17 00:57:49.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
Jun 17 00:57:49.947: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun 17 00:57:50.238: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-857" in namespace "provisioning-857" to be "Succeeded or Failed"
Jun 17 00:57:50.382: INFO: Pod "hostpath-symlink-prep-provisioning-857": Phase="Pending", Reason="", readiness=false. Elapsed: 144.036997ms
Jun 17 00:57:52.527: INFO: Pod "hostpath-symlink-prep-provisioning-857": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288904715s
STEP: Saw pod success
Jun 17 00:57:52.527: INFO: Pod "hostpath-symlink-prep-provisioning-857" satisfied condition "Succeeded or Failed"
Jun 17 00:57:52.527: INFO: Deleting pod "hostpath-symlink-prep-provisioning-857" in namespace "provisioning-857"
Jun 17 00:57:52.679: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-857" to be fully deleted
Jun 17 00:57:52.824: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-thck
Jun 17 00:57:55.263: INFO: Running '/tmp/kubectl3756681389/kubectl --server=https://api.e2e-bf5376b553-82074.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-857 exec pod-subpath-test-inlinevolume-thck --container test-container-volume-inlinevolume-thck -- /bin/sh -c rm -r /test-volume/provisioning-857'
Jun 17 00:57:57.017: INFO: stderr: ""
Jun 17 00:57:57.017: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-thck
Jun 17 00:57:57.017: INFO: Deleting pod "pod-subpath-test-inlinevolume-thck" in namespace "provisioning-857"
Jun 17 00:57:57.162: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-thck" to be fully deleted
STEP: Deleting pod
Jun 17 00:58:13.453: INFO: Deleting pod "pod-subpath-test-inlinevolume-thck" in namespace "provisioning-857"
Jun 17 00:58:13.741: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-857" in namespace "provisioning-857" to be "Succeeded or Failed"
Jun 17 00:58:13.886: INFO: Pod "hostpath-symlink-prep-provisioning-857": Phase="Pending", Reason="", readiness=false. Elapsed: 144.298947ms
Jun 17 00:58:16.034: INFO: Pod "hostpath-symlink-prep-provisioning-857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293082467s
Jun 17 00:58:18.180: INFO: Pod "hostpath-symlink-prep-provisioning-857": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438365299s
Jun 17 00:58:20.326: INFO: Pod "hostpath-symlink-prep-provisioning-857": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.584494458s
STEP: Saw pod success
Jun 17 00:58:20.326: INFO: Pod "hostpath-symlink-prep-provisioning-857" satisfied condition "Succeeded or Failed"
Jun 17 00:58:20.326: INFO: Deleting pod "hostpath-symlink-prep-provisioning-857" in namespace "provisioning-857"
Jun 17 00:58:20.473: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-857" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:58:20.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-857" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":6,"skipped":50,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:58:21.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8027" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":7,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:58:22.047: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 70 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 00:58:22.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":8,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jun 17 00:58:22.622: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 27246 lines ...
 service.go:446] Removing service port \"services-6281/externalsvc\"\nI0617 01:01:43.748710       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:43.780677       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.036379ms\"\nI0617 01:01:43.780769       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:43.813326       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.607511ms\"\nI0617 01:01:46.624798       1 service.go:306] Service services-6222/nodeport-update-service updated: 2 ports\nI0617 01:01:46.624845       1 service.go:423] Updating existing service port \"services-6222/nodeport-update-service:tcp-port\" at 100.64.216.77:80/TCP\nI0617 01:01:46.624862       1 service.go:421] Adding new service port \"services-6222/nodeport-update-service:udp-port\" at 100.64.216.77:80/UDP\nI0617 01:01:46.624936       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:46.658568       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for services-6222/nodeport-update-service:udp-port\\\" (:31996/udp4)\"\nI0617 01:01:46.658651       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for services-6222/nodeport-update-service:tcp-port\\\" (:31038/tcp4)\"\nI0617 01:01:46.664415       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.565592ms\"\nI0617 01:01:46.664645       1 proxier.go:841] \"Stale service\" protocol=\"udp\" svcPortName=\"services-6222/nodeport-update-service:udp-port\" clusterIP=\"100.64.216.77\"\nI0617 01:01:46.664719       1 proxier.go:848] Stale udp service NodePort services-6222/nodeport-update-service:udp-port -> 31996\nI0617 01:01:46.664748       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:46.712175       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.722066ms\"\nI0617 01:01:51.158136       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-attacher updated: 1 ports\nI0617 01:01:51.158187       1 service.go:421] Adding new service port \"volume-expand-7844-2168/csi-hostpath-attacher:dummy\" at 100.67.88.27:12345/TCP\nI0617 01:01:51.158264       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:51.191221       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.031778ms\"\nI0617 01:01:51.191336       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:51.225314       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.050111ms\"\nI0617 01:01:51.596669       1 service.go:306] Service volume-expand-7844-2168/csi-hostpathplugin updated: 1 ports\nI0617 01:01:51.896623       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-provisioner updated: 1 ports\nI0617 01:01:51.921046       1 service.go:306] Service volume-9937-5615/csi-hostpath-attacher updated: 1 ports\nI0617 01:01:52.194358       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-resizer updated: 1 ports\nI0617 01:01:52.194409       1 service.go:421] Adding new service port \"volume-expand-7844-2168/csi-hostpathplugin:dummy\" at 100.71.146.126:12345/TCP\nI0617 01:01:52.194429       1 service.go:421] Adding new service port \"volume-expand-7844-2168/csi-hostpath-provisioner:dummy\" at 100.64.170.2:12345/TCP\nI0617 01:01:52.194441       1 service.go:421] Adding new service port \"volume-9937-5615/csi-hostpath-attacher:dummy\" at 100.65.125.217:12345/TCP\nI0617 01:01:52.194451       1 service.go:421] Adding new service port \"volume-expand-7844-2168/csi-hostpath-resizer:dummy\" at 100.69.245.49:12345/TCP\nI0617 01:01:52.194525       1 proxier.go:854] \"Syncing iptables 
rules\"\nI0617 01:01:52.229781       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.371268ms\"\nI0617 01:01:52.372478       1 service.go:306] Service volume-9937-5615/csi-hostpathplugin updated: 1 ports\nI0617 01:01:52.454501       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-attacher updated: 1 ports\nI0617 01:01:52.498143       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:01:52.684898       1 service.go:306] Service volume-9937-5615/csi-hostpath-provisioner updated: 1 ports\nI0617 01:01:52.901429       1 service.go:306] Service ephemeral-9221-7051/csi-hostpathplugin updated: 1 ports\nI0617 01:01:52.993999       1 service.go:306] Service volume-9937-5615/csi-hostpath-resizer updated: 1 ports\nI0617 01:01:53.200472       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-provisioner updated: 1 ports\nI0617 01:01:53.200544       1 service.go:421] Adding new service port \"volume-9937-5615/csi-hostpath-provisioner:dummy\" at 100.70.232.15:12345/TCP\nI0617 01:01:53.200562       1 service.go:421] Adding new service port \"ephemeral-9221-7051/csi-hostpathplugin:dummy\" at 100.69.236.226:12345/TCP\nI0617 01:01:53.200575       1 service.go:421] Adding new service port \"volume-9937-5615/csi-hostpath-resizer:dummy\" at 100.68.31.212:12345/TCP\nI0617 01:01:53.200587       1 service.go:421] Adding new service port \"ephemeral-9221-7051/csi-hostpath-provisioner:dummy\" at 100.69.167.252:12345/TCP\nI0617 01:01:53.200599       1 service.go:421] Adding new service port \"volume-9937-5615/csi-hostpathplugin:dummy\" at 100.69.51.0:12345/TCP\nI0617 01:01:53.200609       1 service.go:421] Adding new service port \"ephemeral-9221-7051/csi-hostpath-attacher:dummy\" at 100.69.132.22:12345/TCP\nI0617 01:01:53.200619       1 service.go:421] Adding new service port \"volume-expand-7844-2168/csi-hostpath-snapshotter:dummy\" at 100.67.232.1:12345/TCP\nI0617 01:01:53.200707       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:53.262460       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"61.924852ms\"\nI0617 01:01:53.285668       1 service.go:306] Service volume-9937-5615/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:01:53.591032       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-resizer updated: 1 ports\nI0617 01:01:53.885430       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:01:54.262590       1 service.go:421] Adding new service port \"ephemeral-9221-7051/csi-hostpath-resizer:dummy\" at 100.65.78.170:12345/TCP\nI0617 01:01:54.262621       1 service.go:421] Adding new service port \"ephemeral-9221-7051/csi-hostpath-snapshotter:dummy\" at 100.71.65.253:12345/TCP\nI0617 01:01:54.262645       1 service.go:421] Adding new service port \"volume-9937-5615/csi-hostpath-snapshotter:dummy\" at 100.70.70.28:12345/TCP\nI0617 01:01:54.262815       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:54.311523       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.937496ms\"\nI0617 01:01:55.077842       1 service.go:306] Service services-4141/up-down-3 updated: 1 ports\nI0617 01:01:55.311671       1 service.go:421] Adding new service port \"services-4141/up-down-3\" at 100.70.237.169:80/TCP\nI0617 01:01:55.311844       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:55.345366       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.715888ms\"\nI0617 01:01:56.346145       1 proxier.go:854] \"Syncing iptables 
rules\"\nI0617 01:01:56.380480       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.4278ms\"\nI0617 01:01:59.277980       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:59.313882       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.963137ms\"\nI0617 01:01:59.643120       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:59.683501       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.441205ms\"\nI0617 01:02:00.683752       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:00.723636       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.99405ms\"\nI0617 01:02:01.723990       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:01.762628       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.717759ms\"\nI0617 01:02:02.640496       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:02.678637       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.227009ms\"\nI0617 01:02:03.642124       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:03.678515       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.487214ms\"\nI0617 01:02:04.447622       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:04.550648       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"103.076901ms\"\nI0617 01:02:06.239645       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:06.286543       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.959838ms\"\nI0617 01:02:26.646274       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-attacher updated: 1 ports\nI0617 01:02:26.646323       1 service.go:421] Adding new service port \"volume-expand-3709-2314/csi-hostpath-attacher:dummy\" at 100.66.3.103:12345/TCP\nI0617 01:02:26.646411       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:26.693766       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.434601ms\"\nI0617 01:02:26.693872       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:26.729428       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.62263ms\"\nI0617 01:02:27.085293       1 service.go:306] Service volume-expand-3709-2314/csi-hostpathplugin updated: 1 ports\nI0617 01:02:27.379936       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-provisioner updated: 1 ports\nI0617 01:02:27.673131       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-resizer updated: 1 ports\nI0617 01:02:27.673201       1 service.go:421] Adding new service port \"volume-expand-3709-2314/csi-hostpathplugin:dummy\" at 100.70.221.112:12345/TCP\nI0617 01:02:27.673221       1 service.go:421] Adding new service port \"volume-expand-3709-2314/csi-hostpath-provisioner:dummy\" at 100.65.82.37:12345/TCP\nI0617 01:02:27.673236       1 service.go:421] Adding new service port \"volume-expand-3709-2314/csi-hostpath-resizer:dummy\" at 100.68.145.99:12345/TCP\nI0617 01:02:27.673323       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:27.712992       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.806096ms\"\nI0617 01:02:27.966945       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:02:28.713297       1 service.go:421] Adding new service port \"volume-expand-3709-2314/csi-hostpath-snapshotter:dummy\" at 100.66.134.75:12345/TCP\nI0617 01:02:28.713434       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:28.751446       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.183019ms\"\nI0617 
I0617 01:02:32.832800       1 service.go:306] Service services-6222/nodeport-update-service updated: 0 ports
I0617 01:02:32.832835       1 service.go:446] Removing service port "services-6222/nodeport-update-service:udp-port"
I0617 01:02:32.832994       1 service.go:446] Removing service port "services-6222/nodeport-update-service:tcp-port"
I0617 01:02:32.833091       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:32.873019       1 proxier.go:824] "syncProxyRules complete" elapsed="40.176407ms"
I0617 01:02:32.873290       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:32.907937       1 proxier.go:824] "syncProxyRules complete" elapsed="34.876174ms"
I0617 01:02:34.275642       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:34.403554       1 proxier.go:824] "syncProxyRules complete" elapsed="124.211342ms"
I0617 01:02:34.941486       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:34.977888       1 proxier.go:824] "syncProxyRules complete" elapsed="36.466477ms"
I0617 01:02:35.529232       1 service.go:306] Service services-4141/up-down-2 updated: 0 ports
I0617 01:02:35.539371       1 service.go:306] Service services-4141/up-down-3 updated: 0 ports
I0617 01:02:35.978343       1 service.go:446] Removing service port "services-4141/up-down-2"
I0617 01:02:35.978394       1 service.go:446] Removing service port "services-4141/up-down-3"
I0617 01:02:35.978530       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:36.034291       1 proxier.go:824] "syncProxyRules complete" elapsed="55.90936ms"
I0617 01:02:37.034618       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:37.122657       1 proxier.go:824] "syncProxyRules complete" elapsed="88.1931ms"
I0617 01:02:38.123783       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:38.164989       1 proxier.go:824] "syncProxyRules complete" elapsed="41.321869ms"
I0617 01:02:50.387841       1 service.go:306] Service volume-9937-5615/csi-hostpath-attacher updated: 0 ports
I0617 01:02:50.387887       1 service.go:446] Removing service port "volume-9937-5615/csi-hostpath-attacher:dummy"
I0617 01:02:50.387976       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:50.424106       1 proxier.go:824] "syncProxyRules complete" elapsed="36.207147ms"
I0617 01:02:50.424223       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:50.459782       1 proxier.go:824] "syncProxyRules complete" elapsed="35.63412ms"
I0617 01:02:50.837211       1 service.go:306] Service volume-9937-5615/csi-hostpathplugin updated: 0 ports
I0617 01:02:51.133487       1 service.go:306] Service volume-9937-5615/csi-hostpath-provisioner updated: 0 ports
I0617 01:02:51.428297       1 service.go:306] Service volume-9937-5615/csi-hostpath-resizer updated: 0 ports
I0617 01:02:51.428348       1 service.go:446] Removing service port "volume-9937-5615/csi-hostpathplugin:dummy"
I0617 01:02:51.428366       1 service.go:446] Removing service port "volume-9937-5615/csi-hostpath-provisioner:dummy"
I0617 01:02:51.428374       1 service.go:446] Removing service port "volume-9937-5615/csi-hostpath-resizer:dummy"
I0617 01:02:51.428498       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:51.476354       1 proxier.go:824] "syncProxyRules complete" elapsed="47.989767ms"
I0617 01:02:51.739526       1 service.go:306] Service volume-9937-5615/csi-hostpath-snapshotter updated: 0 ports
I0617 01:02:52.476505       1 service.go:446] Removing service port "volume-9937-5615/csi-hostpath-snapshotter:dummy"
I0617 01:02:52.476670       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:52.544959       1 proxier.go:824] "syncProxyRules complete" elapsed="68.455753ms"
I0617 01:02:54.038510       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-attacher updated: 0 ports
I0617 01:02:54.038545       1 service.go:446] Removing service port "ephemeral-9221-7051/csi-hostpath-attacher:dummy"
I0617 01:02:54.038618       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:54.074494       1 proxier.go:824] "syncProxyRules complete" elapsed="35.935151ms"
I0617 01:02:54.511908       1 service.go:306] Service ephemeral-9221-7051/csi-hostpathplugin updated: 0 ports
I0617 01:02:54.511952       1 service.go:446] Removing service port "ephemeral-9221-7051/csi-hostpathplugin:dummy"
I0617 01:02:54.512044       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:54.580188       1 proxier.go:824] "syncProxyRules complete" elapsed="68.220936ms"
I0617 01:02:54.817862       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-provisioner updated: 0 ports
I0617 01:02:55.117035       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-resizer updated: 0 ports
I0617 01:02:55.425235       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-snapshotter updated: 0 ports
I0617 01:02:55.425274       1 service.go:446] Removing service port "ephemeral-9221-7051/csi-hostpath-provisioner:dummy"
I0617 01:02:55.425288       1 service.go:446] Removing service port "ephemeral-9221-7051/csi-hostpath-resizer:dummy"
I0617 01:02:55.425296       1 service.go:446] Removing service port "ephemeral-9221-7051/csi-hostpath-snapshotter:dummy"
I0617 01:02:55.425409       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:55.472978       1 proxier.go:824] "syncProxyRules complete" elapsed="47.688336ms"
I0617 01:02:56.473343       1 proxier.go:854] "Syncing iptables rules"
I0617 01:02:56.512144       1 proxier.go:824] "syncProxyRules complete" elapsed="38.893301ms"
I0617 01:03:11.797125       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-attacher updated: 0 ports
I0617 01:03:11.797163       1 service.go:446] Removing service port "volume-expand-3709-2314/csi-hostpath-attacher:dummy"
I0617 01:03:11.797255       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:11.831827       1 proxier.go:824] "syncProxyRules complete" elapsed="34.654695ms"
I0617 01:03:11.838234       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:11.875483       1 proxier.go:824] "syncProxyRules complete" elapsed="37.293683ms"
I0617 01:03:12.265277       1 service.go:306] Service volume-expand-3709-2314/csi-hostpathplugin updated: 0 ports
I0617 01:03:12.568780       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-provisioner updated: 0 ports
I0617 01:03:12.876220       1 service.go:446] Removing service port "volume-expand-3709-2314/csi-hostpathplugin:dummy"
I0617 01:03:12.876260       1 service.go:446] Removing service port "volume-expand-3709-2314/csi-hostpath-provisioner:dummy"
I0617 01:03:12.876412       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:12.895622       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-resizer updated: 0 ports
I0617 01:03:12.912527       1 proxier.go:824] "syncProxyRules complete" elapsed="36.362137ms"
I0617 01:03:13.195313       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-snapshotter updated: 0 ports
I0617 01:03:13.914257       1 service.go:446] Removing service port "volume-expand-3709-2314/csi-hostpath-resizer:dummy"
I0617 01:03:13.914307       1 service.go:446] Removing service port "volume-expand-3709-2314/csi-hostpath-snapshotter:dummy"
I0617 01:03:13.914402       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:14.017144       1 proxier.go:824] "syncProxyRules complete" elapsed="102.883606ms"
I0617 01:03:29.364171       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:29.395589       1 proxier.go:824] "syncProxyRules complete" elapsed="31.46262ms"
I0617 01:03:30.605317       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:30.646647       1 proxier.go:824] "syncProxyRules complete" elapsed="41.399517ms"
I0617 01:03:31.919170       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:31.950492       1 proxier.go:824] "syncProxyRules complete" elapsed="31.380989ms"
I0617 01:03:35.110720       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-attacher updated: 1 ports
I0617 01:03:35.110771       1 service.go:421] Adding new service port "ephemeral-9020-469/csi-hostpath-attacher:dummy" at 100.67.155.101:12345/TCP
I0617 01:03:35.110850       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:35.145992       1 proxier.go:824] "syncProxyRules complete" elapsed="35.21667ms"
I0617 01:03:35.146083       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:35.191316       1 proxier.go:824] "syncProxyRules complete" elapsed="45.280087ms"
I0617 01:03:35.549419       1 service.go:306] Service ephemeral-9020-469/csi-hostpathplugin updated: 1 ports
I0617 01:03:35.843651       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-provisioner updated: 1 ports
I0617 01:03:36.137437       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-resizer updated: 1 ports
I0617 01:03:36.137487       1 service.go:421] Adding new service port "ephemeral-9020-469/csi-hostpathplugin:dummy" at 100.65.179.174:12345/TCP
I0617 01:03:36.137508       1 service.go:421] Adding new service port "ephemeral-9020-469/csi-hostpath-provisioner:dummy" at 100.70.82.246:12345/TCP
I0617 01:03:36.137518       1 service.go:421] Adding new service port "ephemeral-9020-469/csi-hostpath-resizer:dummy" at 100.65.222.152:12345/TCP
I0617 01:03:36.137638       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:36.293886       1 proxier.go:824] "syncProxyRules complete" elapsed="156.393159ms"
I0617 01:03:36.433111       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-snapshotter updated: 1 ports
I0617 01:03:37.147832       1 service.go:421] Adding new service port "ephemeral-9020-469/csi-hostpath-snapshotter:dummy" at 100.65.50.74:12345/TCP
I0617 01:03:37.147966       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:37.200618       1 proxier.go:824] "syncProxyRules complete" elapsed="52.799187ms"
I0617 01:03:38.201664       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:38.235650       1 proxier.go:824] "syncProxyRules complete" elapsed="34.07515ms"
I0617 01:03:38.953523       1 service.go:306] Service services-3389/service-headless-toggled updated: 1 ports
I0617 01:03:39.235802       1 service.go:421] Adding new service port "services-3389/service-headless-toggled" at 100.71.171.135:80/TCP
I0617 01:03:39.235967       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:39.269441       1 proxier.go:824] "syncProxyRules complete" elapsed="33.656574ms"
I0617 01:03:40.092893       1 service.go:306] Service conntrack-2223/svc-udp updated: 1 ports
I0617 01:03:40.269934       1 service.go:421] Adding new service port "conntrack-2223/svc-udp:udp" at 100.67.253.84:80/UDP
I0617 01:03:40.270100       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:40.304879       1 proxier.go:824] "syncProxyRules complete" elapsed="34.98567ms"
I0617 01:03:47.536046       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:47.592630       1 proxier.go:824] "syncProxyRules complete" elapsed="56.64138ms"
I0617 01:03:49.397143       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:49.434084       1 proxier.go:824] "syncProxyRules complete" elapsed="37.030022ms"
I0617 01:03:49.685946       1 proxier.go:841] "Stale service" protocol="udp" svcPortName="conntrack-2223/svc-udp:udp" clusterIP="100.67.253.84"
I0617 01:03:49.685985       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:49.754839       1 proxier.go:824] "syncProxyRules complete" elapsed="69.061701ms"
I0617 01:03:50.998758       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:51.032833       1 proxier.go:824] "syncProxyRules complete" elapsed="34.156457ms"
I0617 01:03:55.336203       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-attacher updated: 0 ports
I0617 01:03:55.336250       1 service.go:446] Removing service port "volume-expand-7844-2168/csi-hostpath-attacher:dummy"
I0617 01:03:55.336341       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:55.369957       1 proxier.go:824] "syncProxyRules complete" elapsed="33.696849ms"
I0617 01:03:55.370130       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:55.403620       1 proxier.go:824] "syncProxyRules complete" elapsed="33.623911ms"
I0617 01:03:55.788203       1 service.go:306] Service volume-expand-7844-2168/csi-hostpathplugin updated: 0 ports
I0617 01:03:56.093162       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-provisioner updated: 0 ports
I0617 01:03:56.388937       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-resizer updated: 0 ports
I0617 01:03:56.388986       1 service.go:446] Removing service port "volume-expand-7844-2168/csi-hostpathplugin:dummy"
I0617 01:03:56.389002       1 service.go:446] Removing service port "volume-expand-7844-2168/csi-hostpath-provisioner:dummy"
I0617 01:03:56.389010       1 service.go:446] Removing service port "volume-expand-7844-2168/csi-hostpath-resizer:dummy"
I0617 01:03:56.389139       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:56.425015       1 proxier.go:824] "syncProxyRules complete" elapsed="36.020086ms"
I0617 01:03:56.697111       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-snapshotter updated: 0 ports
I0617 01:03:57.425695       1 service.go:446] Removing service port "volume-expand-7844-2168/csi-hostpath-snapshotter:dummy"
I0617 01:03:57.425873       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:57.470373       1 proxier.go:824] "syncProxyRules complete" elapsed="44.679598ms"
I0617 01:03:59.394174       1 proxier.go:854] "Syncing iptables rules"
I0617 01:03:59.427863       1 proxier.go:824] "syncProxyRules complete" elapsed="33.755615ms"
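The "Stale service" entry above fires for conntrack-2223/svc-udp because UDP has no connection teardown: when a UDP service's endpoints change, kube-proxy must delete stale conntrack entries for the cluster IP or clients keep being steered to dead backends. A sketch of that cleanup, assuming the conntrack CLI is installed on the node (the exact invocation kube-proxy uses internally may differ):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // clearStaleUDP deletes conntrack entries whose original destination is the
    // given cluster IP, mirroring the stale-UDP-service cleanup logged above.
    func clearStaleUDP(clusterIP string) error {
        // "conntrack -D" exits non-zero when zero flows matched; that case is
        // harmless and just means nothing was stale.
        out, err := exec.Command("conntrack", "-D", "--orig-dst", clusterIP, "-p", "udp").CombinedOutput()
        if err != nil {
            return fmt.Errorf("conntrack -D --orig-dst %s -p udp: %v (output: %s)", clusterIP, err, out)
        }
        return nil
    }

    func main() {
        if err := clearStaleUDP("100.67.253.84"); err != nil {
            fmt.Println(err)
        }
    }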
elapsed=\"60.269712ms\"\nI0617 01:04:08.380397       1 service.go:306] Service dns-1609/test-service-2 updated: 1 ports\nI0617 01:04:08.380444       1 service.go:421] Adding new service port \"dns-1609/test-service-2:http\" at 100.70.254.202:80/TCP\nI0617 01:04:08.380537       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:08.416329       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.882333ms\"\nI0617 01:04:08.416429       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:08.450150       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.782557ms\"\nI0617 01:04:10.656707       1 service.go:306] Service services-3389/service-headless-toggled updated: 0 ports\nI0617 01:04:10.656753       1 service.go:446] Removing service port \"services-3389/service-headless-toggled\"\nI0617 01:04:10.656837       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:10.691563       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.801287ms\"\nI0617 01:04:10.691786       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:10.735571       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.968157ms\"\nI0617 01:04:16.400883       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:16.460577       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.80541ms\"\nI0617 01:04:16.531798       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:16.564147       1 service.go:306] Service conntrack-2223/svc-udp updated: 0 ports\nI0617 01:04:16.580816       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.069016ms\"\nI0617 01:04:17.581263       1 service.go:446] Removing service port \"conntrack-2223/svc-udp:udp\"\nI0617 01:04:17.581379       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:17.617939       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.681297ms\"\nI0617 01:04:21.140671       1 service.go:306] Service services-3389/service-headless-toggled updated: 1 ports\nI0617 01:04:21.140717       1 service.go:421] Adding new service port \"services-3389/service-headless-toggled\" at 100.71.171.135:80/TCP\nI0617 01:04:21.140797       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:21.188122       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.39386ms\"\nI0617 01:04:34.352316       1 service.go:306] Service ephemeral-7804-8814/csi-hostpath-attacher updated: 1 ports\nI0617 01:04:34.352364       1 service.go:421] Adding new service port \"ephemeral-7804-8814/csi-hostpath-attacher:dummy\" at 100.67.190.52:12345/TCP\nI0617 01:04:34.352449       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:34.398957       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.58799ms\"\nI0617 01:04:34.399125       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:34.442798       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.792572ms\"\nI0617 01:04:34.791488       1 service.go:306] Service ephemeral-7804-8814/csi-hostpathplugin updated: 1 ports\nI0617 01:04:35.091739       1 service.go:306] Service ephemeral-7804-8814/csi-hostpath-provisioner updated: 1 ports\nI0617 01:04:35.384518       1 service.go:306] Service ephemeral-7804-8814/csi-hostpath-resizer updated: 1 ports\nI0617 01:04:35.384563       1 service.go:421] Adding new service port \"ephemeral-7804-8814/csi-hostpathplugin:dummy\" at 100.64.107.87:12345/TCP\nI0617 01:04:35.384581       1 service.go:421] Adding new service port \"ephemeral-7804-8814/csi-hostpath-provisioner:dummy\" at 
100.67.225.38:12345/TCP\nI0617 01:04:35.384592       1 service.go:421] Adding new service port \"ephemeral-7804-8814/csi-hostpath-resizer:dummy\" at 100.66.82.142:12345/TCP\nI0617 01:04:35.384676       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:35.435233       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.661948ms\"\nI0617 01:04:35.687623       1 service.go:306] Service ephemeral-7804-8814/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:04:36.435463       1 service.go:421] Adding new service port \"ephemeral-7804-8814/csi-hostpath-snapshotter:dummy\" at 100.65.91.90:12345/TCP\nI0617 01:04:36.435567       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:36.470959       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.522239ms\"\nI0617 01:04:37.471320       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:37.504150       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.950176ms\"\nI0617 01:04:38.504935       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:38.584163       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"79.577857ms\"\nI0617 01:04:39.584810       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:39.629408       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.701743ms\"\nI0617 01:04:41.398385       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-attacher updated: 0 ports\nI0617 01:04:41.398429       1 service.go:446] Removing service port \"ephemeral-9020-469/csi-hostpath-attacher:dummy\"\nI0617 01:04:41.398528       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:41.450762       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.318813ms\"\nI0617 01:04:41.450867       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:41.500942       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.132389ms\"\nI0617 01:04:41.848680       1 service.go:306] Service ephemeral-9020-469/csi-hostpathplugin updated: 0 ports\nI0617 01:04:42.149317       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-provisioner updated: 0 ports\nI0617 01:04:42.447568       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-resizer updated: 0 ports\nI0617 01:04:42.447624       1 service.go:446] Removing service port \"ephemeral-9020-469/csi-hostpathplugin:dummy\"\nI0617 01:04:42.447639       1 service.go:446] Removing service port \"ephemeral-9020-469/csi-hostpath-provisioner:dummy\"\nI0617 01:04:42.447648       1 service.go:446] Removing service port \"ephemeral-9020-469/csi-hostpath-resizer:dummy\"\nI0617 01:04:42.447764       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:42.489802       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.160316ms\"\nI0617 01:04:42.743866       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-snapshotter updated: 0 ports\nI0617 01:04:43.489940       1 service.go:446] Removing service port \"ephemeral-9020-469/csi-hostpath-snapshotter:dummy\"\nI0617 01:04:43.490088       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:43.530685       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.744832ms\"\nI0617 01:04:44.453942       1 service.go:306] Service dns-1609/test-service-2 updated: 0 ports\nI0617 01:04:44.453984       1 service.go:446] Removing service port \"dns-1609/test-service-2:http\"\nI0617 01:04:44.454077       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:44.489288       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.288783ms\"\nI0617 
01:04:45.489505       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:45.521141       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.716702ms\"\nI0617 01:04:53.554927       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:53.589262       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.392611ms\"\nI0617 01:04:53.589462       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:53.619805       1 service.go:306] Service services-8645/sourceip-test updated: 1 ports\nI0617 01:04:53.621925       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.621661ms\"\nI0617 01:04:53.746254       1 service.go:306] Service services-3389/service-headless-toggled updated: 0 ports\nI0617 01:04:54.622238       1 service.go:421] Adding new service port \"services-8645/sourceip-test\" at 100.68.52.235:8080/TCP\nI0617 01:04:54.622301       1 service.go:446] Removing service port \"services-3389/service-headless-toggled\"\nI0617 01:04:54.622380       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:54.656023       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.827667ms\"\nI0617 01:04:59.564719       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:59.597370       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.716827ms\"\nI0617 01:05:06.648605       1 service.go:306] Service webhook-6103/e2e-test-webhook updated: 1 ports\nI0617 01:05:06.648659       1 service.go:421] Adding new service port \"webhook-6103/e2e-test-webhook\" at 100.66.10.109:8443/TCP\nI0617 01:05:06.648733       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:05:06.703214       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"54.532203ms\"\nI0617 01:05:06.703398       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:05:06.736178       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.859987ms\"\nI0617 01:05:11.201066       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:05:11.268237       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.225803ms\"\nI0617 01:05:11.357856       1 service.go:306] Service services-8645/sourceip-test updated: 0 ports\nI0617 01:05:11.357893       1 service.go:446] Removing service port \"services-8645/sourceip-test\"\nI0617 01:05:11.357962       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:05:11.405518       1 service.go:306] Service webhook-9727/e2e-test-webhook updated: 1 ports\nI0617 01:05:11.408599       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.609704ms\"\nI0617 01:05:11.875659       1 service.go:306] Service webhook-6103/e2e-test-webhook updated: 0 ports\nI0617 01:05:12.408773       1 service.go:421] Adding new service port \"webhook-9727/e2e-test-webhook\" at 100.65.56.36:8443/TCP\nI0617 01:05:12.408815       1 service.go:446] Removing service port \"webhook-6103/e2e-test-webhook\"\nI0617 01:05:12.408922       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:05:12.440228       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.485814ms\"\nI0617 01:05:13.234779       1 service.go:306] Service webhook-8819/e2e-test-webhook updated: 1 ports\nI0617 01:05:13.234822       1 service.go:421] Adding new service port \"webhook-8819/e2e-test-webhook\" at 100.71.33.2:8443/TCP\nI0617 01:05:13.234892       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:05:13.270784       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.958112ms\"\n==== END logs for container kube-proxy of pod 
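Each "Syncing iptables rules" / "syncProxyRules complete" pair in the log above brackets one full rewrite of the node's iptables rules, with the reported elapsed time (roughly 30-100ms throughout this run) measured around the whole rewrite. The timing itself is a plain deferred duration measurement; a minimal Go sketch, with the sleep standing in for rule generation and restore:

    package main

    import (
        "log"
        "time"
    )

    // syncProxyRules stands in for kube-proxy's full iptables rewrite; only the
    // start/complete timing pattern from the log above is illustrated here.
    func syncProxyRules() {
        start := time.Now()
        defer func() {
            log.Printf("%q elapsed=%q", "syncProxyRules complete", time.Since(start).String())
        }()
        log.Printf("%q", "Syncing iptables rules")
        time.Sleep(35 * time.Millisecond) // placeholder for building and restoring rules
    }

    func main() { syncProxyRules() }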
==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-60-41.sa-east-1.compute.internal ====
I0617 00:48:59.240333       1 flags.go:59] FLAG: --add-dir-header="false"
I0617 00:48:59.240716       1 flags.go:59] FLAG: --alsologtostderr="true"
I0617 00:48:59.240730       1 flags.go:59] FLAG: --bind-address="0.0.0.0"
I0617 00:48:59.240739       1 flags.go:59] FLAG: --bind-address-hard-fail="false"
I0617 00:48:59.240746       1 flags.go:59] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
I0617 00:48:59.240751       1 flags.go:59] FLAG: --cleanup="false"
I0617 00:48:59.240763       1 flags.go:59] FLAG: --cluster-cidr="100.96.0.0/11"
I0617 00:48:59.240770       1 flags.go:59] FLAG: --config=""
I0617 00:48:59.240774       1 flags.go:59] FLAG: --config-sync-period="15m0s"
I0617 00:48:59.240782       1 flags.go:59] FLAG: --conntrack-max-per-core="131072"
I0617 00:48:59.240788       1 flags.go:59] FLAG: --conntrack-min="131072"
I0617 00:48:59.240793       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I0617 00:48:59.240798       1 flags.go:59] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I0617 00:48:59.240807       1 flags.go:59] FLAG: --detect-local-mode=""
I0617 00:48:59.240813       1 flags.go:59] FLAG: --feature-gates=""
I0617 00:48:59.240821       1 flags.go:59] FLAG: --healthz-bind-address="0.0.0.0:10256"
I0617 00:48:59.240827       1 flags.go:59] FLAG: --healthz-port="10256"
I0617 00:48:59.240832       1 flags.go:59] FLAG: --help="false"
I0617 00:48:59.240837       1 flags.go:59] FLAG: --hostname-override="ip-172-20-60-41.sa-east-1.compute.internal"
I0617 00:48:59.240848       1 flags.go:59] FLAG: --iptables-masquerade-bit="14"
I0617 00:48:59.240853       1 flags.go:59] FLAG: --iptables-min-sync-period="1s"
I0617 00:48:59.240858       1 flags.go:59] FLAG: --iptables-sync-period="30s"
I0617 00:48:59.240863       1 flags.go:59] FLAG: --ipvs-exclude-cidrs="[]"
I0617 00:48:59.240923       1 flags.go:59] FLAG: --ipvs-min-sync-period="0s"
I0617 00:48:59.240927       1 flags.go:59] FLAG: --ipvs-scheduler=""
I0617 00:48:59.240932       1 flags.go:59] FLAG: --ipvs-strict-arp="false"
I0617 00:48:59.240937       1 flags.go:59] FLAG: --ipvs-sync-period="30s"
I0617 00:48:59.240947       1 flags.go:59] FLAG: --ipvs-tcp-timeout="0s"
I0617 00:48:59.240951       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout="0s"
I0617 00:48:59.240959       1 flags.go:59] FLAG: --ipvs-udp-timeout="0s"
I0617 00:48:59.240964       1 flags.go:59] FLAG: --kube-api-burst="10"
I0617 00:48:59.240968       1 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0617 00:48:59.240974       1 flags.go:59] FLAG: --kube-api-qps="5"
I0617 00:48:59.240981       1 flags.go:59] FLAG: --kubeconfig="/var/lib/kube-proxy/kubeconfig"
I0617 00:48:59.240991       1 flags.go:59] FLAG: --log-backtrace-at=":0"
I0617 00:48:59.240999       1 flags.go:59] FLAG: --log-dir=""
I0617 00:48:59.241004       1 flags.go:59] FLAG: --log-file="/var/log/kube-proxy.log"
I0617 00:48:59.241010       1 flags.go:59] FLAG: --log-file-max-size="1800"
I0617 00:48:59.241014       1 flags.go:59] FLAG: --log-flush-frequency="5s"
I0617 00:48:59.241019       1 flags.go:59] FLAG: --logtostderr="false"
I0617 00:48:59.241029       1 flags.go:59] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
I0617 00:48:59.241035       1 flags.go:59] FLAG: --masquerade-all="false"
I0617 00:48:59.241040       1 flags.go:59] FLAG: --master="https://api.internal.e2e-bf5376b553-82074.test-cncf-aws.k8s.io"
I0617 00:48:59.241046       1 flags.go:59] FLAG: --metrics-bind-address="127.0.0.1:10249"
I0617 00:48:59.241052       1 flags.go:59] FLAG: --metrics-port="10249"
I0617 00:48:59.241057       1 flags.go:59] FLAG: --nodeport-addresses="[]"
I0617 00:48:59.241065       1 flags.go:59] FLAG: --one-output="false"
I0617 00:48:59.241069       1 flags.go:59] FLAG: --oom-score-adj="-998"
I0617 00:48:59.241082       1 flags.go:59] FLAG: --profiling="false"
I0617 00:48:59.241086       1 flags.go:59] FLAG: --proxy-mode=""
I0617 00:48:59.241093       1 flags.go:59] FLAG: --proxy-port-range=""
I0617 00:48:59.241099       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=""
I0617 00:48:59.241106       1 flags.go:59] FLAG: --skip-headers="false"
I0617 00:48:59.241111       1 flags.go:59] FLAG: --skip-log-headers="false"
I0617 00:48:59.241116       1 flags.go:59] FLAG: --stderrthreshold="2"
I0617 00:48:59.241125       1 flags.go:59] FLAG: --udp-timeout="250ms"
I0617 00:48:59.241129       1 flags.go:59] FLAG: --v="2"
I0617 00:48:59.241134       1 flags.go:59] FLAG: --version="false"
I0617 00:48:59.241142       1 flags.go:59] FLAG: --vmodule=""
I0617 00:48:59.241146       1 flags.go:59] FLAG: --write-config-to=""
W0617 00:48:59.241158       1 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0617 00:48:59.241476       1 feature_gate.go:243] feature gates: &{map[]}
I0617 00:48:59.241728       1 feature_gate.go:243] feature gates: &{map[]}
I0617 00:48:59.276512       1 node.go:172] Successfully retrieved node IP: 172.20.60.41
I0617 00:48:59.276544       1 server_others.go:140] Detected node IP 172.20.60.41
W0617 00:48:59.276705       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
I0617 00:48:59.276847       1 server_others.go:177] DetectLocalMode: 'ClusterCIDR'
I0617 00:48:59.305030       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0617 00:48:59.305063       1 server_others.go:212] Using iptables Proxier.
I0617 00:48:59.305077       1 server_others.go:219] creating dualStackProxier for iptables.
W0617 00:48:59.305095       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0617 00:48:59.305183       1 utils.go:375] Changed sysctl "net/ipv4/conf/all/route_localnet": 0 -> 1
I0617 00:48:59.305287       1 proxier.go:282] "using iptables mark for masquerade" ipFamily=IPv4 mark="0x00004000"
I0617 00:48:59.305374       1 proxier.go:330] "iptables sync params" ipFamily=IPv4 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I0617 00:48:59.305510       1 proxier.go:340] "iptables supports --random-fully" ipFamily=IPv4
I0617 00:48:59.305621       1 proxier.go:282] "using iptables mark for masquerade" ipFamily=IPv6 mark="0x00004000"
I0617 00:48:59.305650       1 proxier.go:330] "iptables sync params" ipFamily=IPv6 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I0617 00:48:59.305667       1 proxier.go:340] "iptables supports --random-fully" ipFamily=IPv6
I0617 00:48:59.305812       1 server.go:643] Version: v1.21.2
I0617 00:48:59.306842       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144
I0617 00:48:59.306993       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0617 00:48:59.307402       1 mount_linux.go:192] Detected OS without systemd
I0617 00:48:59.307688       1 conntrack.go:83] Setting conntrack hashsize to 65536
I0617 00:48:59.326132       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0617 00:48:59.326201       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0617 00:48:59.326779       1 config.go:315] Starting service config controller
I0617 00:48:59.326891       1 shared_informer.go:240] Waiting for caches to sync for service config
I0617 00:48:59.327007       1 config.go:224] Starting endpoint slice config controller
I0617 00:48:59.327112       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W0617 00:48:59.328317       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0617 00:48:59.329995       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0617 00:48:59.330276       1 service.go:306] Service kube-system/kube-dns updated: 3 ports
I0617 00:48:59.330320       1 service.go:306] Service default/kubernetes updated: 1 ports
I0617 00:48:59.427832       1 shared_informer.go:247] Caches are synced for endpoint slice config
I0617 00:48:59.427832       1 shared_informer.go:247] Caches are synced for service config
I0617 00:48:59.427906       1 service.go:421] Adding new service port "default/kubernetes:https" at 100.64.0.1:443/TCP
I0617 00:48:59.427930       1 service.go:421] Adding new service port "kube-system/kube-dns:dns" at 100.64.0.10:53/UDP
I0617 00:48:59.427942       1 service.go:421] Adding new service port "kube-system/kube-dns:dns-tcp" at 100.64.0.10:53/TCP
I0617 00:48:59.427987       1 service.go:421] Adding new service port "kube-system/kube-dns:metrics" at 100.64.0.10:9153/TCP
I0617 00:48:59.428061       1 proxier.go:854] "Syncing iptables rules"
I0617 00:48:59.525497       1 proxier.go:824] "syncProxyRules complete" elapsed="97.579897ms"
I0617 00:48:59.525606       1 proxier.go:816] "Not syncing iptables until Services and Endpoints have been received from master"
I0617 00:48:59.525636       1 proxier.go:854] "Syncing iptables rules"
I0617 00:48:59.555028       1 proxier.go:824] "syncProxyRules complete" elapsed="29.396814ms"
I0617 00:48:59.555065       1 proxier.go:854] "Syncing iptables rules"
I0617 00:48:59.585813       1 proxier.go:824] "syncProxyRules complete" elapsed="30.751016ms"
I0617 00:49:14.133499       1 proxier.go:841] "Stale service" protocol="udp" svcPortName="kube-system/kube-dns:dns" clusterIP="100.64.0.10"
I0617 00:49:14.133529       1 proxier.go:854] "Syncing iptables rules"
I0617 00:49:14.168056       1 proxier.go:824] "syncProxyRules complete" elapsed="34.649036ms"
I0617 00:49:14.168247       1 proxier.go:854] "Syncing iptables rules"
I0617 00:49:14.200041       1 proxier.go:824] "syncProxyRules complete" elapsed="31.945305ms"
I0617 00:49:24.863285       1 proxier.go:854] "Syncing iptables rules"
I0617 00:49:24.895450       1 proxier.go:824] "syncProxyRules complete" elapsed="32.149416ms"
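The startup lines above set the proxier's sync parameters to minSyncPeriod="1s", syncPeriod="30s", burstSyncs=2: change-driven syncs are coalesced so rewrites run at most about once per second (with a small burst allowance), while a full resync still happens at least every 30s even with no changes. That cadence explains the steady drumbeat of sync pairs through the rest of this log. A simplified Go sketch of such a coalescing loop (an assumed shape, not kube-proxy's actual BoundedFrequencyRunner):

    package main

    import (
        "fmt"
        "time"
    )

    // runCoalesced invokes sync no more often than minSync after the previous
    // run, and no less often than maxSync even when no changes arrive.
    func runCoalesced(changes <-chan struct{}, minSync, maxSync time.Duration, sync func()) {
        last := time.Now()
        sync()
        for {
            select {
            case <-changes:
                if wait := minSync - time.Since(last); wait > 0 {
                    time.Sleep(wait) // coalesce any further changes landing in this window
                }
            case <-time.After(maxSync):
                // periodic full resync with no pending changes
            }
            last = time.Now()
            sync()
        }
    }

    func main() {
        changes := make(chan struct{}, 16)
        go runCoalesced(changes, time.Second, 30*time.Second, func() { fmt.Println("sync") })
        changes <- struct{}{}
        time.Sleep(2 * time.Second)
    }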
\"Syncing iptables rules\"\nI0617 00:49:24.929838       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.34006ms\"\nI0617 00:52:00.214149       1 service.go:306] Service services-361/multi-endpoint-test updated: 2 ports\nI0617 00:52:00.214238       1 service.go:421] Adding new service port \"services-361/multi-endpoint-test:portname1\" at 100.67.208.84:80/TCP\nI0617 00:52:00.214259       1 service.go:421] Adding new service port \"services-361/multi-endpoint-test:portname2\" at 100.67.208.84:81/TCP\nI0617 00:52:00.214295       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:00.250243       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.037481ms\"\nI0617 00:52:00.250401       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:00.283745       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.457917ms\"\nI0617 00:52:00.349473       1 service.go:306] Service services-7632/test-service-x8cf6 updated: 1 ports\nI0617 00:52:00.795305       1 service.go:306] Service services-7632/test-service-x8cf6 updated: 1 ports\nI0617 00:52:01.284256       1 service.go:421] Adding new service port \"services-7632/test-service-x8cf6:http\" at 100.67.195.237:80/TCP\nI0617 00:52:01.284312       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:01.317209       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.97106ms\"\nI0617 00:52:01.894891       1 service.go:306] Service services-7632/test-service-x8cf6 updated: 0 ports\nI0617 00:52:02.317498       1 service.go:446] Removing service port \"services-7632/test-service-x8cf6:http\"\nI0617 00:52:02.317564       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:02.360093       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.594142ms\"\nI0617 00:52:05.302627       1 service.go:306] Service provisioning-3992-1428/csi-hostpath-attacher updated: 1 ports\nI0617 00:52:05.302690       1 service.go:421] Adding new service port \"provisioning-3992-1428/csi-hostpath-attacher:dummy\" at 100.71.155.84:12345/TCP\nI0617 00:52:05.302728       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:05.333676       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"30.98119ms\"\nI0617 00:52:05.333729       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:05.363765       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"30.046978ms\"\nI0617 00:52:05.526468       1 service.go:306] Service volume-expand-5117-840/csi-hostpath-attacher updated: 1 ports\nI0617 00:52:05.748841       1 service.go:306] Service provisioning-3992-1428/csi-hostpathplugin updated: 1 ports\nI0617 00:52:05.978980       1 service.go:306] Service volume-expand-5117-840/csi-hostpathplugin updated: 1 ports\nI0617 00:52:06.047799       1 service.go:306] Service provisioning-3992-1428/csi-hostpath-provisioner updated: 1 ports\nI0617 00:52:06.279431       1 service.go:306] Service volume-expand-5117-840/csi-hostpath-provisioner updated: 1 ports\nI0617 00:52:06.343913       1 service.go:306] Service provisioning-3992-1428/csi-hostpath-resizer updated: 1 ports\nI0617 00:52:06.343961       1 service.go:421] Adding new service port \"provisioning-3992-1428/csi-hostpath-resizer:dummy\" at 100.66.119.133:12345/TCP\nI0617 00:52:06.343980       1 service.go:421] Adding new service port \"volume-expand-5117-840/csi-hostpath-attacher:dummy\" at 100.66.132.111:12345/TCP\nI0617 00:52:06.344014       1 service.go:421] Adding new service port \"provisioning-3992-1428/csi-hostpathplugin:dummy\" at 100.64.165.184:12345/TCP\nI0617 
00:52:06.344024       1 service.go:421] Adding new service port \"volume-expand-5117-840/csi-hostpathplugin:dummy\" at 100.67.160.48:12345/TCP\nI0617 00:52:06.344034       1 service.go:421] Adding new service port \"provisioning-3992-1428/csi-hostpath-provisioner:dummy\" at 100.69.225.221:12345/TCP\nI0617 00:52:06.344043       1 service.go:421] Adding new service port \"volume-expand-5117-840/csi-hostpath-provisioner:dummy\" at 100.65.81.98:12345/TCP\nI0617 00:52:06.344086       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:06.396420       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.455376ms\"\nI0617 00:52:06.573728       1 service.go:306] Service volume-expand-5117-840/csi-hostpath-resizer updated: 1 ports\nI0617 00:52:06.637624       1 service.go:306] Service provisioning-3992-1428/csi-hostpath-snapshotter updated: 1 ports\nI0617 00:52:06.866326       1 service.go:306] Service volume-expand-5117-840/csi-hostpath-snapshotter updated: 1 ports\nI0617 00:52:07.397295       1 service.go:421] Adding new service port \"volume-expand-5117-840/csi-hostpath-resizer:dummy\" at 100.65.222.185:12345/TCP\nI0617 00:52:07.397349       1 service.go:421] Adding new service port \"provisioning-3992-1428/csi-hostpath-snapshotter:dummy\" at 100.66.112.102:12345/TCP\nI0617 00:52:07.397361       1 service.go:421] Adding new service port \"volume-expand-5117-840/csi-hostpath-snapshotter:dummy\" at 100.69.131.114:12345/TCP\nI0617 00:52:07.397405       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:07.438317       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.039142ms\"\nI0617 00:52:17.596842       1 service.go:306] Service kubectl-9461/agnhost-primary updated: 1 ports\nI0617 00:52:17.596890       1 service.go:421] Adding new service port \"kubectl-9461/agnhost-primary\" at 100.68.116.152:6379/TCP\nI0617 00:52:17.596927       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:17.629523       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.629947ms\"\nI0617 00:52:17.629591       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:17.662624       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.062717ms\"\nI0617 00:52:18.662823       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:18.694390       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.631383ms\"\nI0617 00:52:19.378172       1 service.go:306] Service webhook-3509/e2e-test-webhook updated: 1 ports\nI0617 00:52:19.695251       1 service.go:421] Adding new service port \"webhook-3509/e2e-test-webhook\" at 100.70.46.182:8443/TCP\nI0617 00:52:19.695380       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:19.734095       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.884223ms\"\nI0617 00:52:21.098067       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:21.130137       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.111331ms\"\nI0617 00:52:21.694539       1 service.go:306] Service webhook-3509/e2e-test-webhook updated: 0 ports\nI0617 00:52:21.694577       1 service.go:446] Removing service port \"webhook-3509/e2e-test-webhook\"\nI0617 00:52:21.694628       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:21.740678       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.074634ms\"\nI0617 00:52:22.740909       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:22.772023       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.165068ms\"\nI0617 00:52:23.772377       1 proxier.go:854] 
\"Syncing iptables rules\"\nI0617 00:52:23.813119       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.801909ms\"\nI0617 00:52:24.628362       1 service.go:306] Service services-361/multi-endpoint-test updated: 0 ports\nI0617 00:52:24.628401       1 service.go:446] Removing service port \"services-361/multi-endpoint-test:portname1\"\nI0617 00:52:24.628414       1 service.go:446] Removing service port \"services-361/multi-endpoint-test:portname2\"\nI0617 00:52:24.628469       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:24.671404       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.984668ms\"\nI0617 00:52:25.019446       1 service.go:306] Service ephemeral-800-202/csi-hostpath-attacher updated: 1 ports\nI0617 00:52:25.457732       1 service.go:306] Service ephemeral-800-202/csi-hostpathplugin updated: 1 ports\nI0617 00:52:25.671950       1 service.go:421] Adding new service port \"ephemeral-800-202/csi-hostpath-attacher:dummy\" at 100.64.142.245:12345/TCP\nI0617 00:52:25.671985       1 service.go:421] Adding new service port \"ephemeral-800-202/csi-hostpathplugin:dummy\" at 100.70.121.46:12345/TCP\nI0617 00:52:25.672043       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:25.705260       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.331451ms\"\nI0617 00:52:25.758660       1 service.go:306] Service ephemeral-800-202/csi-hostpath-provisioner updated: 1 ports\nI0617 00:52:26.064633       1 service.go:306] Service ephemeral-800-202/csi-hostpath-resizer updated: 1 ports\nI0617 00:52:26.359907       1 service.go:306] Service ephemeral-800-202/csi-hostpath-snapshotter updated: 1 ports\nI0617 00:52:26.578069       1 service.go:306] Service services-4362/hairpin-test updated: 1 ports\nI0617 00:52:26.705549       1 service.go:421] Adding new service port \"services-4362/hairpin-test\" at 100.65.12.76:8080/TCP\nI0617 00:52:26.705608       1 service.go:421] Adding new service port \"ephemeral-800-202/csi-hostpath-provisioner:dummy\" at 100.64.98.255:12345/TCP\nI0617 00:52:26.705620       1 service.go:421] Adding new service port \"ephemeral-800-202/csi-hostpath-resizer:dummy\" at 100.65.82.79:12345/TCP\nI0617 00:52:26.705637       1 service.go:421] Adding new service port \"ephemeral-800-202/csi-hostpath-snapshotter:dummy\" at 100.68.212.171:12345/TCP\nI0617 00:52:26.705694       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:26.750361       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.82304ms\"\nI0617 00:52:28.373576       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:28.406379       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.824281ms\"\nI0617 00:52:29.759323       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:29.801138       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.85994ms\"\nI0617 00:52:29.833172       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:29.884899       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.750763ms\"\nI0617 00:52:29.928701       1 service.go:306] Service kubectl-9461/agnhost-primary updated: 0 ports\nI0617 00:52:30.885051       1 service.go:446] Removing service port \"kubectl-9461/agnhost-primary\"\nI0617 00:52:30.885120       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:30.921824       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.77493ms\"\nI0617 00:52:31.297465       1 service.go:306] Service crd-webhook-8985/e2e-test-crd-conversion-webhook updated: 1 ports\nI0617 00:52:31.923255       
1 service.go:421] Adding new service port \"crd-webhook-8985/e2e-test-crd-conversion-webhook\" at 100.66.249.111:9443/TCP\nI0617 00:52:31.923345       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:32.027530       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"104.287315ms\"\nI0617 00:52:33.028538       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:33.059550       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.074068ms\"\nI0617 00:52:34.060181       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:34.103188       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.080101ms\"\nI0617 00:52:36.641968       1 service.go:306] Service crd-webhook-8985/e2e-test-crd-conversion-webhook updated: 0 ports\nI0617 00:52:36.642006       1 service.go:446] Removing service port \"crd-webhook-8985/e2e-test-crd-conversion-webhook\"\nI0617 00:52:36.642062       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:36.702033       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"60.008623ms\"\nI0617 00:52:36.702113       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:36.741230       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.152243ms\"\nI0617 00:52:37.679423       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:37.716984       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.592552ms\"\nI0617 00:52:39.013469       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:39.062513       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.088928ms\"\nI0617 00:52:39.140133       1 service.go:306] Service services-4362/hairpin-test updated: 0 ports\nI0617 00:52:40.062985       1 service.go:446] Removing service port \"services-4362/hairpin-test\"\nI0617 00:52:40.063062       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:40.112028       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.038027ms\"\nI0617 00:52:41.665168       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:41.696763       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.633285ms\"\nI0617 00:52:42.784809       1 service.go:306] Service ephemeral-6714-2630/csi-hostpath-attacher updated: 1 ports\nI0617 00:52:42.784855       1 service.go:421] Adding new service port \"ephemeral-6714-2630/csi-hostpath-attacher:dummy\" at 100.70.5.83:12345/TCP\nI0617 00:52:42.784907       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:42.846364       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"61.500254ms\"\nI0617 00:52:42.846433       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:52:42.906833       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"60.417825ms\"\nI0617 00:52:43.224570       1 service.go:306] Service ephemeral-6714-2630/csi-hostpathplugin updated: 1 ports\nI0617 00:52:43.532719       1 service.go:306] Service ephemeral-6714-2630/csi-hostpath-provisioner updated: 1 ports\nI0617 00:52:43.664838       1 service.go:306] Service provisioning-8375-1648/csi-hostpath-attacher updated: 1 ports\nI0617 00:52:43.872010       1 service.go:421] Adding new service port \"ephemeral-6714-2630/csi-hostpath-provisioner:dummy\" at 100.66.173.16:12345/TCP\nI0617 00:52:43.872038       1 service.go:421] Adding new service port \"provisioning-8375-1648/csi-hostpath-attacher:dummy\" at 100.64.191.177:12345/TCP\nI0617 00:52:43.872052       1 service.go:421] Adding new service port \"ephemeral-6714-2630/csi-hostpathplugin:dummy\" at 100.70.161.13:12345/TCP\nI0617 00:52:43.872117    
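Note the two-phase pattern visible throughout: "Service ... updated: N ports" is logged as soon as a watch event arrives, while the matching "Adding new service port" / "Removing service port" lines appear only when the accumulated changes are applied at the next sync, often batching several services at once ("updated: 0 ports" marks a deletion). A minimal Go sketch of such a pending-change map, as an illustration rather than kube-proxy's actual ServiceChangeTracker:

    package main

    import "fmt"

    // tracker accumulates per-service port counts between syncs.
    type tracker struct {
        pending map[string]int // service key -> latest port count (0 = deleted)
        active  map[string]bool
    }

    // update records a watch event immediately, as the "updated: N ports" lines do.
    func (t *tracker) update(key string, ports int) {
        fmt.Printf("Service %s updated: %d ports\n", key, ports)
        t.pending[key] = ports
    }

    // apply merges every pending update in one pass, as one sync does above.
    func (t *tracker) apply() {
        for key, ports := range t.pending {
            if ports == 0 {
                fmt.Printf("Removing service port %q\n", key)
                delete(t.active, key)
            } else {
                fmt.Printf("Adding new service port %q\n", key)
                t.active[key] = true
            }
            delete(t.pending, key)
        }
    }

    func main() {
        t := &tracker{pending: map[string]int{}, active: map[string]bool{}}
        t.update("webhook-3509/e2e-test-webhook", 1)
        t.update("services-361/multi-endpoint-test", 0)
        t.apply() // one sync applies both accumulated changes
    }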
I0617 00:52:44.158976       1 service.go:306] Service provisioning-8375-1648/csi-hostpathplugin updated: 1 ports
I0617 00:52:44.227044       1 service.go:306] Service ephemeral-6714-2630/csi-hostpath-snapshotter updated: 1 ports
I0617 00:52:44.457313       1 service.go:306] Service provisioning-8375-1648/csi-hostpath-provisioner updated: 1 ports
I0617 00:52:44.764793       1 service.go:306] Service provisioning-8375-1648/csi-hostpath-resizer updated: 1 ports
I0617 00:52:44.912491       1 service.go:421] Adding new service port "ephemeral-6714-2630/csi-hostpath-resizer:dummy" at 100.66.87.238:12345/TCP
I0617 00:52:44.912522       1 service.go:421] Adding new service port "provisioning-8375-1648/csi-hostpathplugin:dummy" at 100.69.149.240:12345/TCP
I0617 00:52:44.912534       1 service.go:421] Adding new service port "ephemeral-6714-2630/csi-hostpath-snapshotter:dummy" at 100.68.53.141:12345/TCP
I0617 00:52:44.912547       1 service.go:421] Adding new service port "provisioning-8375-1648/csi-hostpath-provisioner:dummy" at 100.71.12.23:12345/TCP
I0617 00:52:44.912565       1 service.go:421] Adding new service port "provisioning-8375-1648/csi-hostpath-resizer:dummy" at 100.68.189.54:12345/TCP
I0617 00:52:44.912628       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:44.952580       1 proxier.go:824] "syncProxyRules complete" elapsed="40.102606ms"
I0617 00:52:45.059675       1 service.go:306] Service provisioning-8375-1648/csi-hostpath-snapshotter updated: 1 ports
I0617 00:52:45.952820       1 service.go:421] Adding new service port "provisioning-8375-1648/csi-hostpath-snapshotter:dummy" at 100.68.51.170:12345/TCP
I0617 00:52:45.952904       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:45.985537       1 proxier.go:824] "syncProxyRules complete" elapsed="32.750601ms"
I0617 00:52:47.556193       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:47.631914       1 proxier.go:824] "syncProxyRules complete" elapsed="75.771094ms"
I0617 00:52:49.497870       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:49.543893       1 proxier.go:824] "syncProxyRules complete" elapsed="46.075759ms"
I0617 00:52:49.961071       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:50.009533       1 proxier.go:824] "syncProxyRules complete" elapsed="48.506845ms"
I0617 00:52:51.009881       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:51.048657       1 proxier.go:824] "syncProxyRules complete" elapsed="38.850616ms"
I0617 00:52:53.595039       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:53.628260       1 proxier.go:824] "syncProxyRules complete" elapsed="33.272968ms"
I0617 00:52:54.540476       1 service.go:306] Service volume-expand-5043-6975/csi-hostpath-attacher updated: 1 ports
I0617 00:52:54.540529       1 service.go:421] Adding new service port "volume-expand-5043-6975/csi-hostpath-attacher:dummy" at 100.69.208.97:12345/TCP
I0617 00:52:54.540581       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:54.574636       1 proxier.go:824] "syncProxyRules complete" elapsed="34.096504ms"
I0617 00:52:54.985959       1 service.go:306] Service volume-expand-5043-6975/csi-hostpathplugin updated: 1 ports
I0617 00:52:54.986008       1 service.go:421] Adding new service port "volume-expand-5043-6975/csi-hostpathplugin:dummy" at 100.64.11.183:12345/TCP
I0617 00:52:54.986066       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:55.019989       1 proxier.go:824] "syncProxyRules complete" elapsed="33.972741ms"
I0617 00:52:55.281568       1 service.go:306] Service volume-expand-5043-6975/csi-hostpath-provisioner updated: 1 ports
I0617 00:52:55.580031       1 service.go:306] Service volume-expand-5043-6975/csi-hostpath-resizer updated: 1 ports
I0617 00:52:55.874342       1 service.go:306] Service volume-expand-5043-6975/csi-hostpath-snapshotter updated: 1 ports
I0617 00:52:55.874391       1 service.go:421] Adding new service port "volume-expand-5043-6975/csi-hostpath-provisioner:dummy" at 100.65.117.220:12345/TCP
I0617 00:52:55.874408       1 service.go:421] Adding new service port "volume-expand-5043-6975/csi-hostpath-resizer:dummy" at 100.68.91.170:12345/TCP
I0617 00:52:55.874419       1 service.go:421] Adding new service port "volume-expand-5043-6975/csi-hostpath-snapshotter:dummy" at 100.65.252.70:12345/TCP
I0617 00:52:55.874477       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:55.919192       1 proxier.go:824] "syncProxyRules complete" elapsed="44.789486ms"
I0617 00:52:56.643183       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:56.679746       1 proxier.go:824] "syncProxyRules complete" elapsed="36.599663ms"
I0617 00:52:57.851031       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:57.854448       1 service.go:306] Service services-7297/service-proxy-toggled updated: 1 ports
I0617 00:52:57.889055       1 proxier.go:824] "syncProxyRules complete" elapsed="38.055289ms"
I0617 00:52:58.859051       1 service.go:421] Adding new service port "services-7297/service-proxy-toggled" at 100.65.190.1:80/TCP
I0617 00:52:58.859168       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:58.925216       1 proxier.go:824] "syncProxyRules complete" elapsed="66.168128ms"
I0617 00:52:59.822903       1 proxier.go:854] "Syncing iptables rules"
I0617 00:52:59.856590       1 proxier.go:824] "syncProxyRules complete" elapsed="33.741647ms"
I0617 00:53:00.856942       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:00.892190       1 proxier.go:824] "syncProxyRules complete" elapsed="35.353833ms"
I0617 00:53:02.922432       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:02.970823       1 proxier.go:824] "syncProxyRules complete" elapsed="48.436739ms"
I0617 00:53:06.085554       1 service.go:306] Service provisioning-6733-7010/csi-hostpath-attacher updated: 1 ports
I0617 00:53:06.085600       1 service.go:421] Adding new service port "provisioning-6733-7010/csi-hostpath-attacher:dummy" at 100.68.85.123:12345/TCP
I0617 00:53:06.085680       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:06.139780       1 proxier.go:824] "syncProxyRules complete" elapsed="54.175956ms"
I0617 00:53:06.139863       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:06.197872       1 proxier.go:824] "syncProxyRules complete" elapsed="58.048865ms"
I0617 00:53:06.546505       1 service.go:306] Service provisioning-6733-7010/csi-hostpathplugin updated: 1 ports
I0617 00:53:06.858538       1 service.go:306] Service provisioning-6733-7010/csi-hostpath-provisioner updated: 1 ports
I0617 00:53:07.153623       1 service.go:306] Service provisioning-6733-7010/csi-hostpath-resizer updated: 1 ports
I0617 00:53:07.153693       1 service.go:421] Adding new service port "provisioning-6733-7010/csi-hostpathplugin:dummy" at 100.68.26.57:12345/TCP
I0617 00:53:07.153712       1 service.go:421] Adding new service port "provisioning-6733-7010/csi-hostpath-provisioner:dummy" at 100.68.141.102:12345/TCP
I0617 00:53:07.153739       1 service.go:421] Adding new service port "provisioning-6733-7010/csi-hostpath-resizer:dummy" at 100.68.218.19:12345/TCP
I0617 00:53:07.153801       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:07.196410       1 proxier.go:824] "syncProxyRules complete" elapsed="42.716431ms"
I0617 00:53:07.448840       1 service.go:306] Service provisioning-6733-7010/csi-hostpath-snapshotter updated: 1 ports
I0617 00:53:08.197275       1 service.go:421] Adding new service port "provisioning-6733-7010/csi-hostpath-snapshotter:dummy" at 100.64.69.40:12345/TCP
I0617 00:53:08.197388       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:08.239448       1 proxier.go:824] "syncProxyRules complete" elapsed="42.193833ms"
I0617 00:53:09.169725       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:09.209348       1 proxier.go:824] "syncProxyRules complete" elapsed="39.670278ms"
I0617 00:53:10.209886       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:10.245481       1 proxier.go:824] "syncProxyRules complete" elapsed="35.689478ms"
I0617 00:53:11.182171       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:11.218144       1 proxier.go:824] "syncProxyRules complete" elapsed="36.038338ms"
I0617 00:53:16.082858       1 service.go:306] Service webhook-8462/e2e-test-webhook updated: 1 ports
I0617 00:53:16.082900       1 service.go:421] Adding new service port "webhook-8462/e2e-test-webhook" at 100.70.118.84:8443/TCP
I0617 00:53:16.082972       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:16.119545       1 proxier.go:824] "syncProxyRules complete" elapsed="36.642011ms"
I0617 00:53:16.119771       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:16.154420       1 proxier.go:824] "syncProxyRules complete" elapsed="34.81581ms"
I0617 00:53:16.158329       1 service.go:306] Service services-7297/service-proxy-toggled updated: 0 ports
I0617 00:53:17.155451       1 service.go:446] Removing service port "services-7297/service-proxy-toggled"
I0617 00:53:17.155584       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:17.202320       1 proxier.go:824] "syncProxyRules complete" elapsed="46.866259ms"
I0617 00:53:18.653947       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:18.695699       1 proxier.go:824] "syncProxyRules complete" elapsed="41.829626ms"
I0617 00:53:19.696916       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:19.741102       1 proxier.go:824] "syncProxyRules complete" elapsed="44.270127ms"
I0617 00:53:20.578450       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:20.615821       1 proxier.go:824] "syncProxyRules complete" elapsed="37.448132ms"
I0617 00:53:21.616155       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:21.660383       1 proxier.go:824] "syncProxyRules complete" elapsed="44.346351ms"
I0617 00:53:22.626132       1 service.go:306] Service services-7297/service-proxy-toggled updated: 1 ports
I0617 00:53:22.626177       1 service.go:421] Adding new service port "services-7297/service-proxy-toggled" at 100.65.190.1:80/TCP
I0617 00:53:22.626278       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:22.663703       1 proxier.go:824] "syncProxyRules complete" elapsed="37.522045ms"
I0617 00:53:23.663912       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:23.709323       1 proxier.go:824] "syncProxyRules complete" elapsed="45.498554ms"
I0617 00:53:24.709667       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:24.751422       1 proxier.go:824] "syncProxyRules complete" elapsed="41.84047ms"
I0617 00:53:25.805073       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:25.841458       1 proxier.go:824] "syncProxyRules complete" elapsed="36.451377ms"
I0617 00:53:27.846376       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:27.882529       1 proxier.go:824] "syncProxyRules complete" elapsed="36.198828ms"
I0617 00:53:30.907047       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:30.975013       1 proxier.go:824] "syncProxyRules complete" elapsed="68.032741ms"
I0617 00:53:32.603195       1 service.go:306] Service webhook-8462/e2e-test-webhook updated: 0 ports
I0617 00:53:32.603238       1 service.go:446] Removing service port "webhook-8462/e2e-test-webhook"
I0617 00:53:32.603309       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:32.639847       1 proxier.go:824] "syncProxyRules complete" elapsed="36.598906ms"
I0617 00:53:32.640066       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:32.677720       1 proxier.go:824] "syncProxyRules complete" elapsed="37.834891ms"
I0617 00:53:38.151211       1 service.go:306] Service webhook-2903/e2e-test-webhook updated: 1 ports
I0617 00:53:38.151265       1 service.go:421] Adding new service port "webhook-2903/e2e-test-webhook" at 100.68.33.71:8443/TCP
I0617 00:53:38.151332       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:38.190128       1 proxier.go:824] "syncProxyRules complete" elapsed="38.859311ms"
I0617 00:53:38.190336       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:38.234898       1 proxier.go:824] "syncProxyRules complete" elapsed="44.71772ms"
I0617 00:53:41.344342       1 service.go:306] Service webhook-2903/e2e-test-webhook updated: 0 ports
I0617 00:53:41.344386       1 service.go:446] Removing service port "webhook-2903/e2e-test-webhook"
I0617 00:53:41.344458       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:41.381625       1 proxier.go:824] "syncProxyRules complete" elapsed="37.225247ms"
I0617 00:53:41.381744       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:41.421162       1 proxier.go:824] "syncProxyRules complete" elapsed="39.491367ms"
I0617 00:53:42.541336       1 service.go:306] Service volume-expand-543-4862/csi-hostpath-attacher updated: 1 ports
I0617 00:53:42.541381       1 service.go:421] Adding new service port "volume-expand-543-4862/csi-hostpath-attacher:dummy" at 100.69.67.151:12345/TCP
I0617 00:53:42.541449       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:42.592746       1 proxier.go:824] "syncProxyRules complete" elapsed="51.333419ms"
I0617 00:53:42.993360       1 service.go:306] Service volume-expand-543-4862/csi-hostpathplugin updated: 1 ports
I0617 00:53:43.058125       1 service.go:306] Service services-7297/service-proxy-toggled updated: 0 ports
I0617 00:53:43.291769       1 service.go:306] Service volume-expand-543-4862/csi-hostpath-provisioner updated: 1 ports
I0617 00:53:43.586853       1 service.go:306] Service volume-expand-543-4862/csi-hostpath-resizer updated: 1 ports
I0617 00:53:43.586901       1 service.go:421] Adding new service port "volume-expand-543-4862/csi-hostpath-resizer:dummy" at 100.70.110.131:12345/TCP
I0617 00:53:43.586919       1 service.go:421] Adding new service port "volume-expand-543-4862/csi-hostpathplugin:dummy" at 100.64.175.239:12345/TCP
I0617 00:53:43.586930       1 service.go:446] Removing service port "services-7297/service-proxy-toggled"
I0617 00:53:43.586942       1 service.go:421] Adding new service port "volume-expand-543-4862/csi-hostpath-provisioner:dummy" at 100.69.18.141:12345/TCP
I0617 00:53:43.587027       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:43.635419       1 proxier.go:824] "syncProxyRules complete" elapsed="48.496971ms"
I0617 00:53:43.883719       1 service.go:306] Service volume-expand-543-4862/csi-hostpath-snapshotter updated: 1 ports
I0617 00:53:44.635656       1 service.go:421] Adding new service port "volume-expand-543-4862/csi-hostpath-snapshotter:dummy" at 100.66.146.65:12345/TCP
I0617 00:53:44.635749       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:44.672627       1 proxier.go:824] "syncProxyRules complete" elapsed="36.981992ms"
I0617 00:53:45.672849       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:45.716378       1 proxier.go:824] "syncProxyRules complete" elapsed="43.51037ms"
I0617 00:53:45.843218       1 service.go:306] Service services-752/endpoint-test2 updated: 1 ports
I0617 00:53:46.687066       1 service.go:421] Adding new service port "services-752/endpoint-test2" at 100.66.216.13:80/TCP
I0617 00:53:46.687176       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:46.727502       1 proxier.go:824] "syncProxyRules complete" elapsed="40.433281ms"
I0617 00:53:47.445569       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:47.482670       1 proxier.go:824] "syncProxyRules complete" elapsed="37.149994ms"
I0617 00:53:48.455785       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:48.493286       1 proxier.go:824] "syncProxyRules complete" elapsed="37.550172ms"
I0617 00:53:49.494050       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:49.535418       1 proxier.go:824] "syncProxyRules complete" elapsed="41.431822ms"
I0617 00:53:50.535707       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:50.602912       1 proxier.go:824] "syncProxyRules complete" elapsed="67.336997ms"
I0617 00:53:51.393161       1 service.go:306] Service volume-expand-5043-6975/csi-hostpath-attacher updated: 0 ports
I0617 00:53:51.393205       1 service.go:446] Removing service port "volume-expand-5043-6975/csi-hostpath-attacher:dummy"
I0617 00:53:51.393274       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:51.440697       1 proxier.go:824] "syncProxyRules complete" elapsed="47.478069ms"
I0617 00:53:51.837673       1 service.go:306] Service volume-expand-5043-6975/csi-hostpathplugin updated: 0 ports
I0617 00:53:52.136691       1 service.go:306] Service volume-expand-5043-6975/csi-hostpath-provisioner updated: 0 ports
I0617 00:53:52.441086       1 service.go:446] Removing service port "volume-expand-5043-6975/csi-hostpath-provisioner:dummy"
I0617 00:53:52.441120       1 service.go:446] Removing service port "volume-expand-5043-6975/csi-hostpathplugin:dummy"
I0617 00:53:52.441220       1 proxier.go:854] "Syncing iptables rules"
I0617 00:53:52.443740       1 service.go:306] Service volume-expand-5043-6975/csi-hostpath-resizer 
updated: 0 ports\nI0617 00:53:52.494063       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.977174ms\"\nI0617 00:53:52.752732       1 service.go:306] Service volume-expand-5043-6975/csi-hostpath-snapshotter updated: 0 ports\nI0617 00:53:53.494337       1 service.go:446] Removing service port \"volume-expand-5043-6975/csi-hostpath-resizer:dummy\"\nI0617 00:53:53.494383       1 service.go:446] Removing service port \"volume-expand-5043-6975/csi-hostpath-snapshotter:dummy\"\nI0617 00:53:53.494503       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:53:53.531530       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.210372ms\"\nI0617 00:53:54.937009       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:53:55.013154       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"76.221457ms\"\nI0617 00:53:55.670427       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:53:55.723362       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.980538ms\"\nI0617 00:53:56.247926       1 service.go:306] Service services-752/endpoint-test2 updated: 0 ports\nI0617 00:53:56.724032       1 service.go:446] Removing service port \"services-752/endpoint-test2\"\nI0617 00:53:56.724144       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:53:56.761118       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.089626ms\"\nI0617 00:53:57.365092       1 service.go:306] Service provisioning-3992-1428/csi-hostpath-attacher updated: 0 ports\nI0617 00:53:57.365136       1 service.go:446] Removing service port \"provisioning-3992-1428/csi-hostpath-attacher:dummy\"\nI0617 00:53:57.365206       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:53:57.421495       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"56.339415ms\"\nI0617 00:53:57.813739       1 service.go:306] Service provisioning-3992-1428/csi-hostpathplugin updated: 0 ports\nI0617 00:53:58.111918       1 service.go:306] Service provisioning-3992-1428/csi-hostpath-provisioner updated: 0 ports\nI0617 00:53:58.418910       1 service.go:306] Service provisioning-3992-1428/csi-hostpath-resizer updated: 0 ports\nI0617 00:53:58.418958       1 service.go:446] Removing service port \"provisioning-3992-1428/csi-hostpath-provisioner:dummy\"\nI0617 00:53:58.418973       1 service.go:446] Removing service port \"provisioning-3992-1428/csi-hostpath-resizer:dummy\"\nI0617 00:53:58.418981       1 service.go:446] Removing service port \"provisioning-3992-1428/csi-hostpathplugin:dummy\"\nI0617 00:53:58.419065       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:53:58.473685       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"54.713475ms\"\nI0617 00:53:58.726326       1 service.go:306] Service provisioning-3992-1428/csi-hostpath-snapshotter updated: 0 ports\nI0617 00:53:59.474294       1 service.go:446] Removing service port \"provisioning-3992-1428/csi-hostpath-snapshotter:dummy\"\nI0617 00:53:59.474417       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:53:59.509796       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.504103ms\"\nI0617 00:54:02.835219       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:02.883486       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.331855ms\"\nI0617 00:54:03.422765       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:03.471554       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.84803ms\"\nI0617 00:54:04.581554       1 service.go:306] Service volume-expand-5117-840/csi-hostpath-attacher 
updated: 0 ports\nI0617 00:54:04.581605       1 service.go:446] Removing service port \"volume-expand-5117-840/csi-hostpath-attacher:dummy\"\nI0617 00:54:04.581672       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:04.630162       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.53832ms\"\nI0617 00:54:05.024619       1 service.go:306] Service volume-expand-5117-840/csi-hostpathplugin updated: 0 ports\nI0617 00:54:05.024663       1 service.go:446] Removing service port \"volume-expand-5117-840/csi-hostpathplugin:dummy\"\nI0617 00:54:05.024745       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:05.078380       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.694039ms\"\nI0617 00:54:05.327078       1 service.go:306] Service volume-expand-5117-840/csi-hostpath-provisioner updated: 0 ports\nI0617 00:54:05.623523       1 service.go:306] Service volume-expand-5117-840/csi-hostpath-resizer updated: 0 ports\nI0617 00:54:05.921613       1 service.go:306] Service volume-expand-5117-840/csi-hostpath-snapshotter updated: 0 ports\nI0617 00:54:05.921662       1 service.go:446] Removing service port \"volume-expand-5117-840/csi-hostpath-resizer:dummy\"\nI0617 00:54:05.921679       1 service.go:446] Removing service port \"volume-expand-5117-840/csi-hostpath-snapshotter:dummy\"\nI0617 00:54:05.921688       1 service.go:446] Removing service port \"volume-expand-5117-840/csi-hostpath-provisioner:dummy\"\nI0617 00:54:05.921785       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:05.992051       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"70.373035ms\"\nI0617 00:54:06.992928       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:07.028037       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.174975ms\"\nI0617 00:54:13.440744       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:13.476303       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.608835ms\"\nI0617 00:54:14.710996       1 service.go:306] Service volume-expand-543-4862/csi-hostpath-attacher updated: 0 ports\nI0617 00:54:14.711037       1 service.go:446] Removing service port \"volume-expand-543-4862/csi-hostpath-attacher:dummy\"\nI0617 00:54:14.711101       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:14.753451       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.393452ms\"\nI0617 00:54:14.753611       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:14.829414       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"75.916433ms\"\nI0617 00:54:15.168790       1 service.go:306] Service volume-expand-543-4862/csi-hostpathplugin updated: 0 ports\nI0617 00:54:15.302904       1 service.go:306] Service provisioning-6733-7010/csi-hostpath-attacher updated: 0 ports\nI0617 00:54:15.470880       1 service.go:306] Service volume-expand-543-4862/csi-hostpath-provisioner updated: 0 ports\nI0617 00:54:15.753146       1 service.go:306] Service provisioning-6733-7010/csi-hostpathplugin updated: 0 ports\nI0617 00:54:15.753196       1 service.go:446] Removing service port \"volume-expand-543-4862/csi-hostpath-provisioner:dummy\"\nI0617 00:54:15.753209       1 service.go:446] Removing service port \"provisioning-6733-7010/csi-hostpathplugin:dummy\"\nI0617 00:54:15.753218       1 service.go:446] Removing service port \"volume-expand-543-4862/csi-hostpathplugin:dummy\"\nI0617 00:54:15.753226       1 service.go:446] Removing service port \"provisioning-6733-7010/csi-hostpath-attacher:dummy\"\nI0617 00:54:15.753323       1 
proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:15.781743       1 service.go:306] Service volume-expand-543-4862/csi-hostpath-resizer updated: 0 ports\nI0617 00:54:15.800432       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.221512ms\"\nI0617 00:54:16.055240       1 service.go:306] Service provisioning-6733-7010/csi-hostpath-provisioner updated: 0 ports\nI0617 00:54:16.096582       1 service.go:306] Service volume-expand-543-4862/csi-hostpath-snapshotter updated: 0 ports\nI0617 00:54:16.356659       1 service.go:306] Service provisioning-6733-7010/csi-hostpath-resizer updated: 0 ports\nI0617 00:54:16.658164       1 service.go:306] Service provisioning-6733-7010/csi-hostpath-snapshotter updated: 0 ports\nI0617 00:54:16.801053       1 service.go:446] Removing service port \"volume-expand-543-4862/csi-hostpath-resizer:dummy\"\nI0617 00:54:16.801097       1 service.go:446] Removing service port \"provisioning-6733-7010/csi-hostpath-provisioner:dummy\"\nI0617 00:54:16.801106       1 service.go:446] Removing service port \"volume-expand-543-4862/csi-hostpath-snapshotter:dummy\"\nI0617 00:54:16.801114       1 service.go:446] Removing service port \"provisioning-6733-7010/csi-hostpath-resizer:dummy\"\nI0617 00:54:16.801121       1 service.go:446] Removing service port \"provisioning-6733-7010/csi-hostpath-snapshotter:dummy\"\nI0617 00:54:16.801242       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:16.847574       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.520822ms\"\nI0617 00:54:22.922233       1 service.go:306] Service provisioning-8375-1648/csi-hostpath-attacher updated: 0 ports\nI0617 00:54:22.922270       1 service.go:446] Removing service port \"provisioning-8375-1648/csi-hostpath-attacher:dummy\"\nI0617 00:54:22.922325       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:22.996484       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"74.194898ms\"\nI0617 00:54:22.996576       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:23.077585       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"81.051865ms\"\nI0617 00:54:23.373965       1 service.go:306] Service provisioning-8375-1648/csi-hostpathplugin updated: 0 ports\nI0617 00:54:23.673193       1 service.go:306] Service provisioning-8375-1648/csi-hostpath-provisioner updated: 0 ports\nI0617 00:54:23.977490       1 service.go:306] Service provisioning-8375-1648/csi-hostpath-resizer updated: 0 ports\nI0617 00:54:23.977525       1 service.go:446] Removing service port \"provisioning-8375-1648/csi-hostpathplugin:dummy\"\nI0617 00:54:23.977539       1 service.go:446] Removing service port \"provisioning-8375-1648/csi-hostpath-provisioner:dummy\"\nI0617 00:54:23.977547       1 service.go:446] Removing service port \"provisioning-8375-1648/csi-hostpath-resizer:dummy\"\nI0617 00:54:23.977619       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:24.038803       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"61.26054ms\"\nI0617 00:54:24.280205       1 service.go:306] Service provisioning-8375-1648/csi-hostpath-snapshotter updated: 0 ports\nI0617 00:54:25.039023       1 service.go:446] Removing service port \"provisioning-8375-1648/csi-hostpath-snapshotter:dummy\"\nI0617 00:54:25.039145       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:25.078337       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.316814ms\"\nI0617 00:54:42.285000       1 service.go:306] Service endpointslice-4973/example-int-port updated: 1 ports\nI0617 
00:54:42.285043       1 service.go:421] Adding new service port \"endpointslice-4973/example-int-port:example\" at 100.65.245.101:80/TCP\nI0617 00:54:42.286149       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:42.325229       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.180156ms\"\nI0617 00:54:42.325303       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:42.379182       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.893378ms\"\nI0617 00:54:42.435523       1 service.go:306] Service endpointslice-4973/example-named-port updated: 1 ports\nI0617 00:54:42.587810       1 service.go:306] Service endpointslice-4973/example-no-match updated: 1 ports\nI0617 00:54:43.379447       1 service.go:421] Adding new service port \"endpointslice-4973/example-named-port:http\" at 100.69.124.183:80/TCP\nI0617 00:54:43.379479       1 service.go:421] Adding new service port \"endpointslice-4973/example-no-match:example-no-match\" at 100.69.232.235:80/TCP\nI0617 00:54:43.379546       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:43.426616       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.190383ms\"\nI0617 00:54:46.873007       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:46.906102       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.139136ms\"\nI0617 00:54:47.276105       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:47.309914       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.892775ms\"\nI0617 00:54:48.310073       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:48.393486       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"83.470126ms\"\nI0617 00:54:56.431621       1 service.go:306] Service ephemeral-800-202/csi-hostpath-attacher updated: 0 ports\nI0617 00:54:56.431655       1 service.go:446] Removing service port \"ephemeral-800-202/csi-hostpath-attacher:dummy\"\nI0617 00:54:56.431724       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:56.500170       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"68.497363ms\"\nI0617 00:54:56.500286       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:56.511220       1 service.go:306] Service provisioning-3575-8837/csi-hostpath-attacher updated: 1 ports\nI0617 00:54:56.552856       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.641252ms\"\nI0617 00:54:56.880621       1 service.go:306] Service ephemeral-800-202/csi-hostpathplugin updated: 0 ports\nI0617 00:54:56.955635       1 service.go:306] Service provisioning-3575-8837/csi-hostpathplugin updated: 1 ports\nI0617 00:54:57.181753       1 service.go:306] Service ephemeral-800-202/csi-hostpath-provisioner updated: 0 ports\nI0617 00:54:57.249511       1 service.go:306] Service provisioning-3575-8837/csi-hostpath-provisioner updated: 1 ports\nI0617 00:54:57.479407       1 service.go:306] Service ephemeral-800-202/csi-hostpath-resizer updated: 0 ports\nI0617 00:54:57.479449       1 service.go:446] Removing service port \"ephemeral-800-202/csi-hostpath-provisioner:dummy\"\nI0617 00:54:57.479473       1 service.go:421] Adding new service port \"provisioning-3575-8837/csi-hostpath-provisioner:dummy\" at 100.67.228.41:12345/TCP\nI0617 00:54:57.479482       1 service.go:446] Removing service port \"ephemeral-800-202/csi-hostpath-resizer:dummy\"\nI0617 00:54:57.479492       1 service.go:421] Adding new service port \"provisioning-3575-8837/csi-hostpath-attacher:dummy\" at 100.70.137.148:12345/TCP\nI0617 00:54:57.479499       1 service.go:446] 
Removing service port \"ephemeral-800-202/csi-hostpathplugin:dummy\"\nI0617 00:54:57.479509       1 service.go:421] Adding new service port \"provisioning-3575-8837/csi-hostpathplugin:dummy\" at 100.71.105.77:12345/TCP\nI0617 00:54:57.479595       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:57.511793       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.331188ms\"\nI0617 00:54:57.547553       1 service.go:306] Service provisioning-3575-8837/csi-hostpath-resizer updated: 1 ports\nI0617 00:54:57.785323       1 service.go:306] Service ephemeral-800-202/csi-hostpath-snapshotter updated: 0 ports\nI0617 00:54:57.839703       1 service.go:306] Service provisioning-3575-8837/csi-hostpath-snapshotter updated: 1 ports\nI0617 00:54:58.511971       1 service.go:421] Adding new service port \"provisioning-3575-8837/csi-hostpath-resizer:dummy\" at 100.66.191.21:12345/TCP\nI0617 00:54:58.512006       1 service.go:446] Removing service port \"ephemeral-800-202/csi-hostpath-snapshotter:dummy\"\nI0617 00:54:58.512020       1 service.go:421] Adding new service port \"provisioning-3575-8837/csi-hostpath-snapshotter:dummy\" at 100.64.178.246:12345/TCP\nI0617 00:54:58.512125       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:54:58.544683       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.742804ms\"\nI0617 00:55:04.047290       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:04.091805       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.583448ms\"\nI0617 00:55:04.192975       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:04.234600       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.65502ms\"\nI0617 00:55:05.052136       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:05.087950       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.870019ms\"\nI0617 00:55:06.088466       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:06.120727       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.336471ms\"\nI0617 00:55:15.477064       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:15.511034       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.017966ms\"\nI0617 00:55:15.881624       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:15.931407       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.846851ms\"\nI0617 00:55:19.971524       1 service.go:306] Service endpointslice-4973/example-int-port updated: 0 ports\nI0617 00:55:19.971567       1 service.go:446] Removing service port \"endpointslice-4973/example-int-port:example\"\nI0617 00:55:19.971638       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:19.986942       1 service.go:306] Service endpointslice-4973/example-named-port updated: 0 ports\nI0617 00:55:19.998849       1 service.go:306] Service endpointslice-4973/example-no-match updated: 0 ports\nI0617 00:55:20.004848       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.272143ms\"\nI0617 00:55:20.004960       1 service.go:446] Removing service port \"endpointslice-4973/example-named-port:http\"\nI0617 00:55:20.004988       1 service.go:446] Removing service port \"endpointslice-4973/example-no-match:example-no-match\"\nI0617 00:55:20.005129       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:20.044840       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.872325ms\"\nI0617 00:55:20.486041       1 service.go:306] Service kubectl-1215/rm2 updated: 1 ports\nI0617 00:55:21.045085       1 service.go:421] 
Adding new service port \"kubectl-1215/rm2\" at 100.68.140.54:1234/TCP\nI0617 00:55:21.045193       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:21.079655       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.586903ms\"\nI0617 00:55:21.302222       1 service.go:306] Service volume-242-7148/csi-hostpath-attacher updated: 1 ports\nI0617 00:55:21.750143       1 service.go:306] Service volume-242-7148/csi-hostpathplugin updated: 1 ports\nI0617 00:55:22.054272       1 service.go:306] Service volume-242-7148/csi-hostpath-provisioner updated: 1 ports\nI0617 00:55:22.054323       1 service.go:421] Adding new service port \"volume-242-7148/csi-hostpathplugin:dummy\" at 100.70.74.148:12345/TCP\nI0617 00:55:22.054339       1 service.go:421] Adding new service port \"volume-242-7148/csi-hostpath-provisioner:dummy\" at 100.69.95.196:12345/TCP\nI0617 00:55:22.054351       1 service.go:421] Adding new service port \"volume-242-7148/csi-hostpath-attacher:dummy\" at 100.64.72.91:12345/TCP\nI0617 00:55:22.054421       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:22.106657       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.320924ms\"\nI0617 00:55:22.421698       1 service.go:306] Service volume-242-7148/csi-hostpath-resizer updated: 1 ports\nI0617 00:55:22.725058       1 service.go:306] Service volume-242-7148/csi-hostpath-snapshotter updated: 1 ports\nI0617 00:55:23.106918       1 service.go:421] Adding new service port \"volume-242-7148/csi-hostpath-resizer:dummy\" at 100.66.15.131:12345/TCP\nI0617 00:55:23.106968       1 service.go:421] Adding new service port \"volume-242-7148/csi-hostpath-snapshotter:dummy\" at 100.67.44.180:12345/TCP\nI0617 00:55:23.107057       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:23.140937       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.036179ms\"\nI0617 00:55:23.424382       1 service.go:306] Service webhook-6166/e2e-test-webhook updated: 1 ports\nI0617 00:55:23.614760       1 service.go:306] Service kubectl-1215/rm3 updated: 1 ports\nI0617 00:55:24.141115       1 service.go:421] Adding new service port \"webhook-6166/e2e-test-webhook\" at 100.66.101.195:8443/TCP\nI0617 00:55:24.141145       1 service.go:421] Adding new service port \"kubectl-1215/rm3\" at 100.68.171.65:2345/TCP\nI0617 00:55:24.141250       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:24.200286       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.17279ms\"\nI0617 00:55:25.678472       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:25.715927       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.508443ms\"\nI0617 00:55:26.031878       1 service.go:306] Service webhook-6166/e2e-test-webhook updated: 0 ports\nI0617 00:55:26.031915       1 service.go:446] Removing service port \"webhook-6166/e2e-test-webhook\"\nI0617 00:55:26.031989       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:26.090555       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"58.620302ms\"\nI0617 00:55:27.091227       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:27.131422       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.252734ms\"\nI0617 00:55:28.474743       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:28.509289       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.599961ms\"\nI0617 00:55:29.275446       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:29.314448       1 proxier.go:824] \"syncProxyRules complete\" 
elapsed=\"39.047712ms\"\nI0617 00:55:30.282839       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:30.318023       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.316176ms\"\nI0617 00:55:31.426357       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:31.486148       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.863978ms\"\nI0617 00:55:31.552233       1 service.go:306] Service kubectl-1215/rm2 updated: 0 ports\nI0617 00:55:31.563458       1 service.go:306] Service kubectl-1215/rm3 updated: 0 ports\nI0617 00:55:32.487063       1 service.go:446] Removing service port \"kubectl-1215/rm2\"\nI0617 00:55:32.487134       1 service.go:446] Removing service port \"kubectl-1215/rm3\"\nI0617 00:55:32.487267       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:32.530692       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.627329ms\"\nI0617 00:55:33.141616       1 service.go:306] Service ephemeral-6714-2630/csi-hostpath-attacher updated: 0 ports\nI0617 00:55:33.141661       1 service.go:446] Removing service port \"ephemeral-6714-2630/csi-hostpath-attacher:dummy\"\nI0617 00:55:33.141741       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:33.176633       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.961343ms\"\nI0617 00:55:33.591929       1 service.go:306] Service ephemeral-6714-2630/csi-hostpathplugin updated: 0 ports\nI0617 00:55:33.888838       1 service.go:306] Service ephemeral-6714-2630/csi-hostpath-provisioner updated: 0 ports\nI0617 00:55:34.177529       1 service.go:446] Removing service port \"ephemeral-6714-2630/csi-hostpathplugin:dummy\"\nI0617 00:55:34.177560       1 service.go:446] Removing service port \"ephemeral-6714-2630/csi-hostpath-provisioner:dummy\"\nI0617 00:55:34.177651       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:34.185039       1 service.go:306] Service ephemeral-6714-2630/csi-hostpath-resizer updated: 0 ports\nI0617 00:55:34.233313       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"55.779503ms\"\nI0617 00:55:34.489793       1 service.go:306] Service ephemeral-6714-2630/csi-hostpath-snapshotter updated: 0 ports\nI0617 00:55:35.233574       1 service.go:446] Removing service port \"ephemeral-6714-2630/csi-hostpath-resizer:dummy\"\nI0617 00:55:35.233636       1 service.go:446] Removing service port \"ephemeral-6714-2630/csi-hostpath-snapshotter:dummy\"\nI0617 00:55:35.233739       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:35.273666       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.092948ms\"\nI0617 00:55:36.274115       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:36.308258       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.251612ms\"\nI0617 00:55:45.632832       1 service.go:306] Service endpointslicemirroring-5545/example-custom-endpoints updated: 1 ports\nI0617 00:55:45.632871       1 service.go:421] Adding new service port \"endpointslicemirroring-5545/example-custom-endpoints:example\" at 100.64.69.13:80/TCP\nI0617 00:55:45.632946       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:45.715008       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"82.131281ms\"\nI0617 00:55:45.781262       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:45.861387       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"80.178441ms\"\nI0617 00:55:46.862568       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:46.898492       1 proxier.go:824] \"syncProxyRules complete\" 
elapsed=\"36.009371ms\"\nI0617 00:55:51.869723       1 service.go:306] Service endpointslicemirroring-5545/example-custom-endpoints updated: 0 ports\nI0617 00:55:51.869757       1 service.go:446] Removing service port \"endpointslicemirroring-5545/example-custom-endpoints:example\"\nI0617 00:55:51.869825       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:51.907241       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.471476ms\"\nI0617 00:55:54.910351       1 service.go:306] Service provisioning-3575-8837/csi-hostpath-attacher updated: 0 ports\nI0617 00:55:54.910391       1 service.go:446] Removing service port \"provisioning-3575-8837/csi-hostpath-attacher:dummy\"\nI0617 00:55:54.910610       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:54.946586       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.18361ms\"\nI0617 00:55:54.970286       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:55.004212       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.002342ms\"\nI0617 00:55:55.385070       1 service.go:306] Service provisioning-3575-8837/csi-hostpathplugin updated: 0 ports\nI0617 00:55:55.684020       1 service.go:306] Service provisioning-3575-8837/csi-hostpath-provisioner updated: 0 ports\nI0617 00:55:55.981245       1 service.go:306] Service provisioning-3575-8837/csi-hostpath-resizer updated: 0 ports\nI0617 00:55:55.981285       1 service.go:446] Removing service port \"provisioning-3575-8837/csi-hostpath-resizer:dummy\"\nI0617 00:55:55.981302       1 service.go:446] Removing service port \"provisioning-3575-8837/csi-hostpathplugin:dummy\"\nI0617 00:55:55.981312       1 service.go:446] Removing service port \"provisioning-3575-8837/csi-hostpath-provisioner:dummy\"\nI0617 00:55:55.981388       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:56.035387       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"54.08625ms\"\nI0617 00:55:56.286517       1 service.go:306] Service provisioning-3575-8837/csi-hostpath-snapshotter updated: 0 ports\nI0617 00:55:57.036104       1 service.go:446] Removing service port \"provisioning-3575-8837/csi-hostpath-snapshotter:dummy\"\nI0617 00:55:57.036333       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:55:57.068863       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.741224ms\"\nI0617 00:56:03.438618       1 service.go:306] Service volume-expand-1143-9758/csi-hostpath-attacher updated: 1 ports\nI0617 00:56:03.438668       1 service.go:421] Adding new service port \"volume-expand-1143-9758/csi-hostpath-attacher:dummy\" at 100.64.17.168:12345/TCP\nI0617 00:56:03.438734       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:03.473446       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.774617ms\"\nI0617 00:56:03.473534       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:03.506241       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.75041ms\"\nI0617 00:56:03.891099       1 service.go:306] Service volume-expand-1143-9758/csi-hostpathplugin updated: 1 ports\nI0617 00:56:04.187184       1 service.go:306] Service volume-expand-1143-9758/csi-hostpath-provisioner updated: 1 ports\nI0617 00:56:04.483501       1 service.go:306] Service volume-expand-1143-9758/csi-hostpath-resizer updated: 1 ports\nI0617 00:56:04.483554       1 service.go:421] Adding new service port \"volume-expand-1143-9758/csi-hostpathplugin:dummy\" at 100.67.212.26:12345/TCP\nI0617 00:56:04.483572       1 service.go:421] Adding new service port 
\"volume-expand-1143-9758/csi-hostpath-provisioner:dummy\" at 100.64.38.101:12345/TCP\nI0617 00:56:04.483586       1 service.go:421] Adding new service port \"volume-expand-1143-9758/csi-hostpath-resizer:dummy\" at 100.71.57.75:12345/TCP\nI0617 00:56:04.483653       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:04.523035       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.478251ms\"\nI0617 00:56:04.779932       1 service.go:306] Service volume-expand-1143-9758/csi-hostpath-snapshotter updated: 1 ports\nI0617 00:56:05.524154       1 service.go:421] Adding new service port \"volume-expand-1143-9758/csi-hostpath-snapshotter:dummy\" at 100.64.94.175:12345/TCP\nI0617 00:56:05.524257       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:05.568786       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.656927ms\"\nI0617 00:56:06.822767       1 service.go:306] Service webhook-1585/e2e-test-webhook updated: 1 ports\nI0617 00:56:06.822819       1 service.go:421] Adding new service port \"webhook-1585/e2e-test-webhook\" at 100.64.248.49:8443/TCP\nI0617 00:56:06.822877       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:06.859658       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.833703ms\"\nI0617 00:56:07.859972       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:07.894948       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.04583ms\"\nI0617 00:56:09.778943       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:09.820677       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.787861ms\"\nI0617 00:56:10.179050       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:10.212034       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.03922ms\"\nI0617 00:56:11.579506       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:11.612071       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.609191ms\"\nI0617 00:56:12.179054       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:12.224599       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.594312ms\"\nI0617 00:56:13.224916       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:13.257077       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.255985ms\"\nI0617 00:56:20.650063       1 service.go:306] Service webhook-1585/e2e-test-webhook updated: 0 ports\nI0617 00:56:20.650130       1 service.go:446] Removing service port \"webhook-1585/e2e-test-webhook\"\nI0617 00:56:20.650206       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:20.684714       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.577088ms\"\nI0617 00:56:20.684815       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:20.719485       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.729803ms\"\nI0617 00:56:22.531329       1 service.go:306] Service resourcequota-6098/test-service updated: 1 ports\nI0617 00:56:22.531377       1 service.go:421] Adding new service port \"resourcequota-6098/test-service\" at 100.70.155.251:80/TCP\nI0617 00:56:22.531443       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:22.609341       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"77.954136ms\"\nI0617 00:56:22.702110       1 service.go:306] Service resourcequota-6098/test-service-np updated: 1 ports\nI0617 00:56:22.702158       1 service.go:421] Adding new service port \"resourcequota-6098/test-service-np\" at 100.69.169.8:80/TCP\nI0617 00:56:22.702223       1 proxier.go:854] 
\"Syncing iptables rules\"\nI0617 00:56:22.730506       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for resourcequota-6098/test-service-np\\\" (:31353/tcp4)\"\nI0617 00:56:22.736156       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.992226ms\"\nI0617 00:56:25.156159       1 service.go:306] Service resourcequota-6098/test-service updated: 0 ports\nI0617 00:56:25.156215       1 service.go:446] Removing service port \"resourcequota-6098/test-service\"\nI0617 00:56:25.156292       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:25.193049       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.823657ms\"\nI0617 00:56:25.310010       1 service.go:306] Service resourcequota-6098/test-service-np updated: 0 ports\nI0617 00:56:25.310053       1 service.go:446] Removing service port \"resourcequota-6098/test-service-np\"\nI0617 00:56:25.310272       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:25.345918       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.855987ms\"\nI0617 00:56:26.837397       1 service.go:306] Service provisioning-5901-6678/csi-hostpath-attacher updated: 1 ports\nI0617 00:56:26.837452       1 service.go:421] Adding new service port \"provisioning-5901-6678/csi-hostpath-attacher:dummy\" at 100.65.237.147:12345/TCP\nI0617 00:56:26.837521       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:26.871879       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.42198ms\"\nI0617 00:56:27.291174       1 service.go:306] Service provisioning-5901-6678/csi-hostpathplugin updated: 1 ports\nI0617 00:56:27.291228       1 service.go:421] Adding new service port \"provisioning-5901-6678/csi-hostpathplugin:dummy\" at 100.64.115.153:12345/TCP\nI0617 00:56:27.291303       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:27.324055       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.825087ms\"\nI0617 00:56:27.586355       1 service.go:306] Service provisioning-5901-6678/csi-hostpath-provisioner updated: 1 ports\nI0617 00:56:27.881324       1 service.go:306] Service provisioning-5901-6678/csi-hostpath-resizer updated: 1 ports\nI0617 00:56:28.183823       1 service.go:306] Service provisioning-5901-6678/csi-hostpath-snapshotter updated: 1 ports\nI0617 00:56:28.184081       1 service.go:421] Adding new service port \"provisioning-5901-6678/csi-hostpath-resizer:dummy\" at 100.66.14.0:12345/TCP\nI0617 00:56:28.184098       1 service.go:421] Adding new service port \"provisioning-5901-6678/csi-hostpath-snapshotter:dummy\" at 100.70.187.128:12345/TCP\nI0617 00:56:28.184108       1 service.go:421] Adding new service port \"provisioning-5901-6678/csi-hostpath-provisioner:dummy\" at 100.70.91.100:12345/TCP\nI0617 00:56:28.184186       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:28.225031       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.954022ms\"\nI0617 00:56:28.336096       1 service.go:306] Service proxy-2698/proxy-service-pwnds updated: 4 ports\nI0617 00:56:29.225736       1 service.go:421] Adding new service port \"proxy-2698/proxy-service-pwnds:portname1\" at 100.70.145.127:80/TCP\nI0617 00:56:29.225767       1 service.go:421] Adding new service port \"proxy-2698/proxy-service-pwnds:portname2\" at 100.70.145.127:81/TCP\nI0617 00:56:29.225776       1 service.go:421] Adding new service port \"proxy-2698/proxy-service-pwnds:tlsportname1\" at 100.70.145.127:443/TCP\nI0617 00:56:29.225786       1 service.go:421] Adding new service port 
\"proxy-2698/proxy-service-pwnds:tlsportname2\" at 100.70.145.127:444/TCP\nI0617 00:56:29.225871       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:29.267424       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.734003ms\"\nI0617 00:56:30.268099       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:30.350135       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"82.125954ms\"\nI0617 00:56:31.178807       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:31.219513       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.034043ms\"\nI0617 00:56:32.220556       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:32.256204       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.911257ms\"\nI0617 00:56:33.256654       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:33.293057       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.59377ms\"\nI0617 00:56:34.293986       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:34.343938       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.054762ms\"\nI0617 00:56:35.598821       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:35.644108       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.367626ms\"\nI0617 00:56:42.308800       1 service.go:306] Service volume-242-7148/csi-hostpath-attacher updated: 0 ports\nI0617 00:56:42.308840       1 service.go:446] Removing service port \"volume-242-7148/csi-hostpath-attacher:dummy\"\nI0617 00:56:42.308916       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:42.343642       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.791102ms\"\nI0617 00:56:42.343792       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:42.376996       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.31005ms\"\nI0617 00:56:42.754593       1 service.go:306] Service volume-242-7148/csi-hostpathplugin updated: 0 ports\nI0617 00:56:43.052934       1 service.go:306] Service volume-242-7148/csi-hostpath-provisioner updated: 0 ports\nI0617 00:56:43.347256       1 service.go:306] Service volume-242-7148/csi-hostpath-resizer updated: 0 ports\nI0617 00:56:43.347301       1 service.go:446] Removing service port \"volume-242-7148/csi-hostpath-resizer:dummy\"\nI0617 00:56:43.347317       1 service.go:446] Removing service port \"volume-242-7148/csi-hostpathplugin:dummy\"\nI0617 00:56:43.347325       1 service.go:446] Removing service port \"volume-242-7148/csi-hostpath-provisioner:dummy\"\nI0617 00:56:43.347405       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:43.380174       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.862409ms\"\nI0617 00:56:43.652459       1 service.go:306] Service volume-242-7148/csi-hostpath-snapshotter updated: 0 ports\nW0617 00:56:44.333317       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nI0617 00:56:44.380464       1 service.go:446] Removing service port \"volume-242-7148/csi-hostpath-snapshotter:dummy\"\nI0617 00:56:44.380566       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:44.414324       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.867155ms\"\nI0617 00:56:48.770546       1 service.go:306] Service volume-expand-1143-9758/csi-hostpath-attacher updated: 0 ports\nI0617 00:56:48.770584       1 service.go:446] Removing service port \"volume-expand-1143-9758/csi-hostpath-attacher:dummy\"\nI0617 00:56:48.770640    
   1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:48.808602       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.007333ms\"\nI0617 00:56:48.808956       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:48.844466       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.824684ms\"\nI0617 00:56:49.229919       1 service.go:306] Service volume-expand-1143-9758/csi-hostpathplugin updated: 0 ports\nI0617 00:56:49.556478       1 service.go:306] Service volume-expand-1143-9758/csi-hostpath-provisioner updated: 0 ports\nI0617 00:56:49.844927       1 service.go:446] Removing service port \"volume-expand-1143-9758/csi-hostpathplugin:dummy\"\nI0617 00:56:49.844973       1 service.go:446] Removing service port \"volume-expand-1143-9758/csi-hostpath-provisioner:dummy\"\nI0617 00:56:49.845085       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:49.879986       1 service.go:306] Service volume-expand-1143-9758/csi-hostpath-resizer updated: 0 ports\nI0617 00:56:49.880634       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.789368ms\"\nI0617 00:56:50.223118       1 service.go:306] Service volume-expand-1143-9758/csi-hostpath-snapshotter updated: 0 ports\nI0617 00:56:50.747573       1 service.go:306] Service proxy-2698/proxy-service-pwnds updated: 0 ports\nI0617 00:56:50.881639       1 service.go:446] Removing service port \"volume-expand-1143-9758/csi-hostpath-resizer:dummy\"\nI0617 00:56:50.881692       1 service.go:446] Removing service port \"volume-expand-1143-9758/csi-hostpath-snapshotter:dummy\"\nI0617 00:56:50.881700       1 service.go:446] Removing service port \"proxy-2698/proxy-service-pwnds:portname1\"\nI0617 00:56:50.881705       1 service.go:446] Removing service port \"proxy-2698/proxy-service-pwnds:portname2\"\nI0617 00:56:50.881708       1 service.go:446] Removing service port \"proxy-2698/proxy-service-pwnds:tlsportname1\"\nI0617 00:56:50.881712       1 service.go:446] Removing service port \"proxy-2698/proxy-service-pwnds:tlsportname2\"\nI0617 00:56:50.881794       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:50.915445       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.808068ms\"\nI0617 00:56:51.028514       1 service.go:306] Service provisioning-5901-6678/csi-hostpath-attacher updated: 0 ports\nI0617 00:56:51.471455       1 service.go:306] Service provisioning-5901-6678/csi-hostpathplugin updated: 0 ports\nI0617 00:56:51.770968       1 service.go:306] Service provisioning-5901-6678/csi-hostpath-provisioner updated: 0 ports\nI0617 00:56:51.771008       1 service.go:446] Removing service port \"provisioning-5901-6678/csi-hostpath-attacher:dummy\"\nI0617 00:56:51.771023       1 service.go:446] Removing service port \"provisioning-5901-6678/csi-hostpathplugin:dummy\"\nI0617 00:56:51.771031       1 service.go:446] Removing service port \"provisioning-5901-6678/csi-hostpath-provisioner:dummy\"\nI0617 00:56:51.771106       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:51.807692       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.672146ms\"\nI0617 00:56:52.071303       1 service.go:306] Service provisioning-5901-6678/csi-hostpath-resizer updated: 0 ports\nI0617 00:56:52.377780       1 service.go:306] Service provisioning-5901-6678/csi-hostpath-snapshotter updated: 0 ports\nI0617 00:56:52.808308       1 service.go:446] Removing service port \"provisioning-5901-6678/csi-hostpath-resizer:dummy\"\nI0617 00:56:52.808348       1 service.go:446] Removing service port 
\"provisioning-5901-6678/csi-hostpath-snapshotter:dummy\"\nI0617 00:56:52.808434       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:56:52.867820       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.514894ms\"\nI0617 00:57:11.896570       1 service.go:306] Service conntrack-8682/boom-server updated: 1 ports\nI0617 00:57:11.896640       1 service.go:421] Adding new service port \"conntrack-8682/boom-server\" at 100.71.226.27:9000/TCP\nI0617 00:57:11.896692       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:57:11.927570       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"30.926223ms\"\nI0617 00:57:11.927749       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:57:11.960169       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.558337ms\"\nI0617 00:57:19.141889       1 service.go:306] Service webhook-2785/e2e-test-webhook updated: 1 ports\nI0617 00:57:19.141969       1 service.go:421] Adding new service port \"webhook-2785/e2e-test-webhook\" at 100.70.22.219:8443/TCP\nI0617 00:57:19.142027       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:57:19.181882       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.909877ms\"\nI0617 00:57:19.182003       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:57:19.212153       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"30.232114ms\"\nI0617 00:57:24.749409       1 service.go:306] Service webhook-2785/e2e-test-webhook updated: 0 ports\nI0617 00:57:24.749450       1 service.go:446] Removing service port \"webhook-2785/e2e-test-webhook\"\nI0617 00:57:24.749516       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:57:24.782017       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.549866ms\"\nI0617 00:57:24.808504       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:57:24.842624       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.163141ms\"\nI0617 00:57:52.566477       1 service.go:306] Service apply-5537/test-svc updated: 1 ports\nI0617 00:57:52.566521       1 service.go:421] Adding new service port \"apply-5537/test-svc\" at 100.71.146.212:8080/UDP\nI0617 00:57:52.566598       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:57:52.606728       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.197409ms\"\nI0617 00:57:58.187853       1 service.go:306] Service apply-5537/test-svc updated: 0 ports\nI0617 00:57:58.187896       1 service.go:446] Removing service port \"apply-5537/test-svc\"\nI0617 00:57:58.187961       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:57:58.221578       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.674486ms\"\nI0617 00:58:13.140465       1 service.go:306] Service conntrack-2451/svc-udp updated: 1 ports\nI0617 00:58:13.140551       1 service.go:421] Adding new service port \"conntrack-2451/svc-udp:udp\" at 100.70.84.193:80/UDP\nI0617 00:58:13.140615       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:13.171782       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for conntrack-2451/svc-udp:udp\\\" (:31959/udp4)\"\nI0617 00:58:13.176190       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.676377ms\"\nI0617 00:58:13.176390       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:13.212916       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.684418ms\"\nI0617 00:58:20.059764       1 proxier.go:841] \"Stale service\" protocol=\"udp\" svcPortName=\"conntrack-2451/svc-udp:udp\" clusterIP=\"100.70.84.193\"\nI0617 
00:58:20.059839       1 proxier.go:848] Stale udp service NodePort conntrack-2451/svc-udp:udp -> 31959\nI0617 00:58:20.059866       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:20.108278       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.651887ms\"\nI0617 00:58:22.541702       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:22.572899       1 service.go:306] Service conntrack-8682/boom-server updated: 0 ports\nI0617 00:58:22.586159       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.510274ms\"\nI0617 00:58:22.586190       1 service.go:446] Removing service port \"conntrack-8682/boom-server\"\nI0617 00:58:22.586241       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:22.629342       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.138077ms\"\nI0617 00:58:23.629496       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:23.660069       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"30.607299ms\"\nI0617 00:58:29.630081       1 service.go:306] Service webhook-5792/e2e-test-webhook updated: 1 ports\nI0617 00:58:29.630134       1 service.go:421] Adding new service port \"webhook-5792/e2e-test-webhook\" at 100.67.99.15:8443/TCP\nI0617 00:58:29.630189       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:29.665394       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.255972ms\"\nI0617 00:58:29.665578       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:29.697621       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.180922ms\"\nI0617 00:58:31.004461       1 service.go:306] Service kubectl-3896/agnhost-primary updated: 1 ports\nI0617 00:58:31.004510       1 service.go:421] Adding new service port \"kubectl-3896/agnhost-primary\" at 100.71.42.237:6379/TCP\nI0617 00:58:31.004551       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:31.038764       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.245461ms\"\nI0617 00:58:31.984759       1 service.go:306] Service webhook-5792/e2e-test-webhook updated: 0 ports\nI0617 00:58:31.984793       1 service.go:446] Removing service port \"webhook-5792/e2e-test-webhook\"\nI0617 00:58:31.984850       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:32.020928       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.116287ms\"\nI0617 00:58:33.022905       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:33.055173       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.330794ms\"\nI0617 00:58:33.506803       1 service.go:306] Service dns-5298/test-service-2 updated: 1 ports\nI0617 00:58:34.055796       1 service.go:421] Adding new service port \"dns-5298/test-service-2:http\" at 100.71.71.114:80/TCP\nI0617 00:58:34.055878       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:34.088443       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.676464ms\"\nI0617 00:58:34.912109       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:34.944454       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.389129ms\"\nI0617 00:58:36.473046       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:36.507607       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.611643ms\"\nI0617 00:58:38.130954       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:38.177440       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.545241ms\"\nI0617 00:58:38.378310       1 service.go:306] Service kubectl-3896/agnhost-primary updated: 0 ports\nI0617 
00:58:38.378356       1 service.go:446] Removing service port \"kubectl-3896/agnhost-primary\"\nI0617 00:58:38.378412       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:38.410686       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.31357ms\"\nI0617 00:58:39.410858       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:39.444352       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.56042ms\"\nI0617 00:58:49.410745       1 service.go:306] Service endpointslice-5899/example-empty-selector updated: 1 ports\nI0617 00:58:49.410794       1 service.go:421] Adding new service port \"endpointslice-5899/example-empty-selector:example\" at 100.64.95.74:80/TCP\nI0617 00:58:49.410855       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:49.442926       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.127091ms\"\nI0617 00:58:49.443140       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:49.475951       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.983359ms\"\nI0617 00:58:49.773738       1 service.go:306] Service conntrack-2451/svc-udp updated: 0 ports\nI0617 00:58:49.866227       1 service.go:306] Service endpointslice-5899/example-empty-selector updated: 0 ports\nI0617 00:58:50.476111       1 service.go:446] Removing service port \"conntrack-2451/svc-udp:udp\"\nI0617 00:58:50.476182       1 service.go:446] Removing service port \"endpointslice-5899/example-empty-selector:example\"\nI0617 00:58:50.476671       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:58:50.523257       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.152454ms\"\nI0617 00:59:02.484862       1 service.go:306] Service webhook-5465/e2e-test-webhook updated: 1 ports\nI0617 00:59:02.484908       1 service.go:421] Adding new service port \"webhook-5465/e2e-test-webhook\" at 100.68.115.139:8443/TCP\nI0617 00:59:02.484964       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:02.545483       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"60.557083ms\"\nI0617 00:59:02.545588       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:02.622827       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"77.301334ms\"\nI0617 00:59:04.642957       1 service.go:306] Service webhook-5465/e2e-test-webhook updated: 0 ports\nI0617 00:59:04.643000       1 service.go:446] Removing service port \"webhook-5465/e2e-test-webhook\"\nI0617 00:59:04.643059       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:04.675003       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.990538ms\"\nI0617 00:59:04.675217       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:04.711923       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.878806ms\"\nI0617 00:59:12.031467       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:12.092058       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"60.640283ms\"\nI0617 00:59:12.169059       1 service.go:306] Service dns-5298/test-service-2 updated: 0 ports\nI0617 00:59:12.169099       1 service.go:446] Removing service port \"dns-5298/test-service-2:http\"\nI0617 00:59:12.169153       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:12.201477       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.366979ms\"\nI0617 00:59:13.201796       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:13.235485       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.754951ms\"\nI0617 00:59:13.747234       1 service.go:306] Service 
services-3436/externalname-service updated: 1 ports\nI0617 00:59:14.235649       1 service.go:421] Adding new service port \"services-3436/externalname-service:http\" at 100.69.27.239:80/TCP\nI0617 00:59:14.235725       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:14.272249       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for services-3436/externalname-service:http\\\" (:30473/tcp4)\"\nI0617 00:59:14.277604       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.975731ms\"\nI0617 00:59:15.552321       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:15.584612       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.34199ms\"\nI0617 00:59:17.075427       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:17.108402       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.025147ms\"\nW0617 00:59:35.225775       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ings2gq6\nW0617 00:59:35.371150       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingjgwnc\nW0617 00:59:35.515742       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingbxp8d\nW0617 00:59:36.384817       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingbxp8d\nI0617 00:59:36.428137       1 service.go:306] Service webhook-6061/e2e-test-webhook updated: 1 ports\nI0617 00:59:36.428183       1 service.go:421] Adding new service port \"webhook-6061/e2e-test-webhook\" at 100.69.232.239:8443/TCP\nI0617 00:59:36.428245       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:36.463746       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.560558ms\"\nI0617 00:59:36.463948       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:36.495310       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.520767ms\"\nW0617 00:59:36.674499       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingbxp8d\nW0617 00:59:36.820270       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingbxp8d\nW0617 00:59:37.254910       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingjgwnc\nW0617 00:59:37.256655       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ings2gq6\nI0617 00:59:40.646013       1 service.go:306] Service webhook-6061/e2e-test-webhook updated: 0 ports\nI0617 00:59:40.646055       1 service.go:446] Removing service port \"webhook-6061/e2e-test-webhook\"\nI0617 00:59:40.646123       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:40.696723       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.641092ms\"\nI0617 00:59:40.696843       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:40.756530       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.746693ms\"\nI0617 00:59:41.230415       1 service.go:306] Service services-3436/externalname-service updated: 0 ports\nI0617 00:59:41.756702       1 service.go:446] Removing service port 
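The repeated "Error getting endpoint slice cache keys" warnings above are kube-proxy skipping EndpointSlices it cannot map back to a Service: the mapping is derived from the kubernetes.io/service-name label, which the hand-built e2e-example-ing* slices created by this test run lack. A minimal sketch of a slice kube-proxy would accept (name, namespace, and addresses are illustrative; discovery.k8s.io/v1 is used since, as a later warning in this log notes, v1beta1 is deprecated in v1.21+):

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-ing-abc12                    # illustrative
  namespace: default                         # illustrative
  labels:
    kubernetes.io/service-name: example-ing  # ties the slice to Service "example-ing"
addressType: IPv4
ports:
- name: http
  port: 80
  protocol: TCP
endpoints:
- addresses:
  - "10.0.0.10"                              # illustrative endpoint IP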
\"services-3436/externalname-service:http\"\nI0617 00:59:41.756824       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:41.796802       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.097932ms\"\nI0617 00:59:46.326312       1 service.go:306] Service volumemode-1166-51/csi-hostpath-attacher updated: 1 ports\nI0617 00:59:46.326358       1 service.go:421] Adding new service port \"volumemode-1166-51/csi-hostpath-attacher:dummy\" at 100.68.5.198:12345/TCP\nI0617 00:59:46.326417       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:46.362771       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.406188ms\"\nI0617 00:59:46.362954       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:46.403577       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.682285ms\"\nI0617 00:59:46.775604       1 service.go:306] Service volumemode-1166-51/csi-hostpathplugin updated: 1 ports\nI0617 00:59:47.066687       1 service.go:306] Service volumemode-1166-51/csi-hostpath-provisioner updated: 1 ports\nI0617 00:59:47.358274       1 service.go:306] Service volumemode-1166-51/csi-hostpath-resizer updated: 1 ports\nI0617 00:59:47.358316       1 service.go:421] Adding new service port \"volumemode-1166-51/csi-hostpathplugin:dummy\" at 100.68.140.124:12345/TCP\nI0617 00:59:47.358332       1 service.go:421] Adding new service port \"volumemode-1166-51/csi-hostpath-provisioner:dummy\" at 100.67.17.222:12345/TCP\nI0617 00:59:47.358342       1 service.go:421] Adding new service port \"volumemode-1166-51/csi-hostpath-resizer:dummy\" at 100.69.90.188:12345/TCP\nI0617 00:59:47.358400       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:47.396979       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.659005ms\"\nI0617 00:59:47.649531       1 service.go:306] Service volumemode-1166-51/csi-hostpath-snapshotter updated: 1 ports\nI0617 00:59:48.397760       1 service.go:421] Adding new service port \"volumemode-1166-51/csi-hostpath-snapshotter:dummy\" at 100.65.163.117:12345/TCP\nI0617 00:59:48.397853       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:48.438176       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.45228ms\"\nI0617 00:59:49.439307       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:49.470198       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"30.955937ms\"\nI0617 00:59:50.864043       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:50.902826       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.845457ms\"\nI0617 00:59:51.870526       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:51.909057       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.569618ms\"\nI0617 00:59:52.459955       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:52.491893       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.974426ms\"\nI0617 00:59:53.492912       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 00:59:53.542664       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.83673ms\"\nI0617 01:00:04.674936       1 service.go:306] Service volume-provisioning-6956/glusterfs-dynamic-296f4a53-197d-41ee-b2d3-88675c388bcc updated: 1 ports\nI0617 01:00:04.674979       1 service.go:421] Adding new service port \"volume-provisioning-6956/glusterfs-dynamic-296f4a53-197d-41ee-b2d3-88675c388bcc\" at 100.68.193.129:1/TCP\nI0617 01:00:04.675037       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:04.725596       1 proxier.go:824] 
\"syncProxyRules complete\" elapsed=\"50.610705ms\"\nI0617 01:00:05.103627       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:05.137523       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.94092ms\"\nI0617 01:00:07.827753       1 service.go:306] Service volume-provisioning-6956/glusterfs-dynamic-296f4a53-197d-41ee-b2d3-88675c388bcc updated: 0 ports\nI0617 01:00:07.827840       1 service.go:446] Removing service port \"volume-provisioning-6956/glusterfs-dynamic-296f4a53-197d-41ee-b2d3-88675c388bcc\"\nI0617 01:00:07.827908       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:07.868201       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.350942ms\"\nI0617 01:00:07.868302       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:07.902408       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.164708ms\"\nI0617 01:00:25.191553       1 service.go:306] Service volumemode-1166-51/csi-hostpath-attacher updated: 0 ports\nI0617 01:00:25.191591       1 service.go:446] Removing service port \"volumemode-1166-51/csi-hostpath-attacher:dummy\"\nI0617 01:00:25.191649       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:25.234536       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.934219ms\"\nI0617 01:00:25.234741       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:25.283458       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.877731ms\"\nI0617 01:00:25.630130       1 service.go:306] Service volumemode-1166-51/csi-hostpathplugin updated: 0 ports\nI0617 01:00:25.926230       1 service.go:306] Service volumemode-1166-51/csi-hostpath-provisioner updated: 0 ports\nI0617 01:00:26.227727       1 service.go:306] Service volumemode-1166-51/csi-hostpath-resizer updated: 0 ports\nI0617 01:00:26.227780       1 service.go:446] Removing service port \"volumemode-1166-51/csi-hostpath-resizer:dummy\"\nI0617 01:00:26.227795       1 service.go:446] Removing service port \"volumemode-1166-51/csi-hostpathplugin:dummy\"\nI0617 01:00:26.227805       1 service.go:446] Removing service port \"volumemode-1166-51/csi-hostpath-provisioner:dummy\"\nI0617 01:00:26.227904       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:26.259110       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.318623ms\"\nI0617 01:00:26.524846       1 service.go:306] Service volumemode-1166-51/csi-hostpath-snapshotter updated: 0 ports\nI0617 01:00:27.260198       1 service.go:446] Removing service port \"volumemode-1166-51/csi-hostpath-snapshotter:dummy\"\nI0617 01:00:27.260329       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:27.291119       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"30.924389ms\"\nI0617 01:00:37.577118       1 service.go:306] Service services-4141/up-down-1 updated: 1 ports\nI0617 01:00:37.577194       1 service.go:421] Adding new service port \"services-4141/up-down-1\" at 100.70.94.14:80/TCP\nI0617 01:00:37.577273       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:37.614759       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.55795ms\"\nI0617 01:00:37.614840       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:37.657852       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.03685ms\"\nI0617 01:00:40.508816       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:40.549822       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.031476ms\"\nI0617 01:00:41.582590       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 
01:00:41.631970       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.430119ms\"\nI0617 01:00:42.169033       1 service.go:306] Service provisioning-19-3436/csi-hostpath-attacher updated: 1 ports\nI0617 01:00:42.169076       1 service.go:421] Adding new service port \"provisioning-19-3436/csi-hostpath-attacher:dummy\" at 100.65.149.159:12345/TCP\nI0617 01:00:42.169135       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:42.218834       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.745156ms\"\nI0617 01:00:42.616660       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:42.617014       1 service.go:306] Service provisioning-19-3436/csi-hostpathplugin updated: 1 ports\nI0617 01:00:42.675737       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.132603ms\"\nI0617 01:00:42.902378       1 service.go:306] Service provisioning-19-3436/csi-hostpath-provisioner updated: 1 ports\nI0617 01:00:43.198719       1 service.go:306] Service provisioning-19-3436/csi-hostpath-resizer updated: 1 ports\nI0617 01:00:43.493847       1 service.go:306] Service provisioning-19-3436/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:00:43.676467       1 service.go:421] Adding new service port \"provisioning-19-3436/csi-hostpath-snapshotter:dummy\" at 100.65.66.39:12345/TCP\nI0617 01:00:43.676529       1 service.go:421] Adding new service port \"provisioning-19-3436/csi-hostpathplugin:dummy\" at 100.70.130.62:12345/TCP\nI0617 01:00:43.676541       1 service.go:421] Adding new service port \"provisioning-19-3436/csi-hostpath-provisioner:dummy\" at 100.66.152.199:12345/TCP\nI0617 01:00:43.676552       1 service.go:421] Adding new service port \"provisioning-19-3436/csi-hostpath-resizer:dummy\" at 100.67.184.226:12345/TCP\nI0617 01:00:43.676654       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:43.708308       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.861314ms\"\nI0617 01:00:44.167485       1 service.go:306] Service services-4141/up-down-2 updated: 1 ports\nI0617 01:00:44.709149       1 service.go:421] Adding new service port \"services-4141/up-down-2\" at 100.65.50.4:80/TCP\nI0617 01:00:44.709244       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:44.781242       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"72.111062ms\"\nI0617 01:00:45.947481       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:45.979228       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.798515ms\"\nI0617 01:00:46.740319       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:46.772478       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.213682ms\"\nI0617 01:00:47.495798       1 service.go:306] Service volumemode-6563-5143/csi-hostpath-attacher updated: 1 ports\nI0617 01:00:47.773609       1 service.go:421] Adding new service port \"volumemode-6563-5143/csi-hostpath-attacher:dummy\" at 100.70.224.140:12345/TCP\nI0617 01:00:47.773717       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:47.830448       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"56.851591ms\"\nI0617 01:00:47.936672       1 service.go:306] Service volumemode-6563-5143/csi-hostpathplugin updated: 1 ports\nI0617 01:00:48.230310       1 service.go:306] Service volumemode-6563-5143/csi-hostpath-provisioner updated: 1 ports\nI0617 01:00:48.528052       1 service.go:306] Service volumemode-6563-5143/csi-hostpath-resizer updated: 1 ports\nI0617 01:00:48.824134       1 service.go:306] Service 
volumemode-6563-5143/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:00:48.824179       1 service.go:421] Adding new service port \"volumemode-6563-5143/csi-hostpath-resizer:dummy\" at 100.71.95.22:12345/TCP\nI0617 01:00:48.824196       1 service.go:421] Adding new service port \"volumemode-6563-5143/csi-hostpath-snapshotter:dummy\" at 100.67.168.253:12345/TCP\nI0617 01:00:48.824209       1 service.go:421] Adding new service port \"volumemode-6563-5143/csi-hostpathplugin:dummy\" at 100.69.255.129:12345/TCP\nI0617 01:00:48.824225       1 service.go:421] Adding new service port \"volumemode-6563-5143/csi-hostpath-provisioner:dummy\" at 100.66.58.111:12345/TCP\nI0617 01:00:48.824319       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:48.876229       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.045829ms\"\nI0617 01:00:49.877103       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:49.956471       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"79.44935ms\"\nI0617 01:00:50.701350       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:50.767672       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"66.38718ms\"\nI0617 01:00:51.704992       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:51.737448       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.516748ms\"\nI0617 01:00:52.712828       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:52.747316       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.57092ms\"\nI0617 01:00:53.748189       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:53.786244       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.134757ms\"\nI0617 01:00:54.787152       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:54.820411       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.360117ms\"\nI0617 01:00:55.820821       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:55.865777       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.036036ms\"\nI0617 01:00:56.866527       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:00:56.901536       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.096898ms\"\nI0617 01:01:13.937417       1 service.go:306] Service services-6281/clusterip-service updated: 1 ports\nI0617 01:01:13.937473       1 service.go:421] Adding new service port \"services-6281/clusterip-service\" at 100.65.212.247:80/TCP\nI0617 01:01:13.937569       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:14.001997       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"64.520061ms\"\nI0617 01:01:14.002104       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:14.056066       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"54.029586ms\"\nI0617 01:01:14.092420       1 service.go:306] Service services-6281/externalsvc updated: 1 ports\nI0617 01:01:14.790244       1 service.go:306] Service provisioning-19-3436/csi-hostpath-attacher updated: 0 ports\nI0617 01:01:15.056883       1 service.go:446] Removing service port \"provisioning-19-3436/csi-hostpath-attacher:dummy\"\nI0617 01:01:15.056933       1 service.go:421] Adding new service port \"services-6281/externalsvc\" at 100.68.178.88:80/TCP\nI0617 01:01:15.057060       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:15.110571       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.690236ms\"\nI0617 01:01:15.232328       1 service.go:306] Service provisioning-19-3436/csi-hostpathplugin updated: 0 
ports\nI0617 01:01:15.544335       1 service.go:306] Service provisioning-19-3436/csi-hostpath-provisioner updated: 0 ports\nI0617 01:01:15.848078       1 service.go:306] Service provisioning-19-3436/csi-hostpath-resizer updated: 0 ports\nI0617 01:01:16.111028       1 service.go:446] Removing service port \"provisioning-19-3436/csi-hostpath-provisioner:dummy\"\nI0617 01:01:16.111066       1 service.go:446] Removing service port \"provisioning-19-3436/csi-hostpath-resizer:dummy\"\nI0617 01:01:16.111077       1 service.go:446] Removing service port \"provisioning-19-3436/csi-hostpathplugin:dummy\"\nI0617 01:01:16.111199       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:16.145417       1 service.go:306] Service provisioning-19-3436/csi-hostpath-snapshotter updated: 0 ports\nI0617 01:01:16.148489       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.476103ms\"\nI0617 01:01:17.149156       1 service.go:446] Removing service port \"provisioning-19-3436/csi-hostpath-snapshotter:dummy\"\nI0617 01:01:17.149302       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:17.191319       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.16834ms\"\nI0617 01:01:18.191796       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:18.226256       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.539465ms\"\nI0617 01:01:20.115956       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:20.163233       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.390446ms\"\nI0617 01:01:20.822700       1 service.go:306] Service services-6281/clusterip-service updated: 0 ports\nI0617 01:01:20.822745       1 service.go:446] Removing service port \"services-6281/clusterip-service\"\nI0617 01:01:20.822881       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:20.860369       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.613781ms\"\nI0617 01:01:21.860987       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:21.900751       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.838555ms\"\nI0617 01:01:22.234239       1 service.go:306] Service services-6222/nodeport-update-service updated: 1 ports\nI0617 01:01:22.234290       1 service.go:421] Adding new service port \"services-6222/nodeport-update-service\" at 100.64.216.77:80/TCP\nI0617 01:01:22.234367       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:22.267363       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.067956ms\"\nI0617 01:01:22.525035       1 service.go:306] Service services-6222/nodeport-update-service updated: 1 ports\nI0617 01:01:23.268187       1 service.go:421] Adding new service port \"services-6222/nodeport-update-service:tcp-port\" at 100.64.216.77:80/TCP\nI0617 01:01:23.268219       1 service.go:446] Removing service port \"services-6222/nodeport-update-service\"\nI0617 01:01:23.268343       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:23.300366       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for services-6222/nodeport-update-service:tcp-port\\\" (:30521/tcp4)\"\nI0617 01:01:23.305441       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.301489ms\"\nI0617 01:01:24.517072       1 service.go:306] Service volumemode-6563-5143/csi-hostpath-attacher updated: 0 ports\nI0617 01:01:24.517120       1 service.go:446] Removing service port \"volumemode-6563-5143/csi-hostpath-attacher:dummy\"\nI0617 01:01:24.517205       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:24.554009       1 
proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.878806ms\"\nI0617 01:01:25.098351       1 service.go:306] Service volumemode-6563-5143/csi-hostpathplugin updated: 0 ports\nI0617 01:01:25.152984       1 service.go:446] Removing service port \"volumemode-6563-5143/csi-hostpathplugin:dummy\"\nI0617 01:01:25.153208       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:25.205744       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.755832ms\"\nI0617 01:01:25.481230       1 service.go:306] Service volumemode-6563-5143/csi-hostpath-provisioner updated: 0 ports\nI0617 01:01:25.791811       1 service.go:306] Service volumemode-6563-5143/csi-hostpath-resizer updated: 0 ports\nI0617 01:01:26.102810       1 service.go:306] Service volumemode-6563-5143/csi-hostpath-snapshotter updated: 0 ports\nI0617 01:01:26.206685       1 service.go:446] Removing service port \"volumemode-6563-5143/csi-hostpath-provisioner:dummy\"\nI0617 01:01:26.206719       1 service.go:446] Removing service port \"volumemode-6563-5143/csi-hostpath-resizer:dummy\"\nI0617 01:01:26.206727       1 service.go:446] Removing service port \"volumemode-6563-5143/csi-hostpath-snapshotter:dummy\"\nI0617 01:01:26.206821       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:26.255825       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.1434ms\"\nI0617 01:01:28.114698       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:28.161431       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.774475ms\"\nI0617 01:01:29.359918       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:29.416206       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"56.435105ms\"\nI0617 01:01:29.416379       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:29.458013       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.764907ms\"\nI0617 01:01:33.804598       1 service.go:306] Service services-4141/up-down-1 updated: 0 ports\nI0617 01:01:33.804641       1 service.go:446] Removing service port \"services-4141/up-down-1\"\nI0617 01:01:33.804718       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:33.840142       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.4905ms\"\nI0617 01:01:33.840257       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:33.877708       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.522789ms\"\nI0617 01:01:43.780343       1 service.go:306] Service services-6281/externalsvc updated: 0 ports\nI0617 01:01:43.780380       1 service.go:446] Removing service port \"services-6281/externalsvc\"\nI0617 01:01:43.780455       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:43.815220       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.828374ms\"\nI0617 01:01:43.815313       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:43.848435       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.172927ms\"\nI0617 01:01:46.655226       1 service.go:306] Service services-6222/nodeport-update-service updated: 2 ports\nI0617 01:01:46.655278       1 service.go:421] Adding new service port \"services-6222/nodeport-update-service:udp-port\" at 100.64.216.77:80/UDP\nI0617 01:01:46.655295       1 service.go:423] Updating existing service port \"services-6222/nodeport-update-service:tcp-port\" at 100.64.216.77:80/TCP\nI0617 01:01:46.655365       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:46.779816       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for 
services-6222/nodeport-update-service:tcp-port\\\" (:31038/tcp4)\"\nI0617 01:01:46.780139       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for services-6222/nodeport-update-service:udp-port\\\" (:31996/udp4)\"\nI0617 01:01:46.792085       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"136.796042ms\"\nI0617 01:01:46.792288       1 proxier.go:841] \"Stale service\" protocol=\"udp\" svcPortName=\"services-6222/nodeport-update-service:udp-port\" clusterIP=\"100.64.216.77\"\nI0617 01:01:46.792448       1 proxier.go:848] Stale udp service NodePort services-6222/nodeport-update-service:udp-port -> 31996\nI0617 01:01:46.792488       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:46.878482       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"86.35062ms\"\nI0617 01:01:51.188863       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-attacher updated: 1 ports\nI0617 01:01:51.188909       1 service.go:421] Adding new service port \"volume-expand-7844-2168/csi-hostpath-attacher:dummy\" at 100.67.88.27:12345/TCP\nI0617 01:01:51.188984       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:51.222987       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.072524ms\"\nI0617 01:01:51.223085       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:51.260725       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.689608ms\"\nI0617 01:01:51.628493       1 service.go:306] Service volume-expand-7844-2168/csi-hostpathplugin updated: 1 ports\nI0617 01:01:51.927950       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-provisioner updated: 1 ports\nI0617 01:01:51.953110       1 service.go:306] Service volume-9937-5615/csi-hostpath-attacher updated: 1 ports\nI0617 01:01:52.226360       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-resizer updated: 1 ports\nI0617 01:01:52.226403       1 service.go:421] Adding new service port \"volume-expand-7844-2168/csi-hostpath-resizer:dummy\" at 100.69.245.49:12345/TCP\nI0617 01:01:52.226421       1 service.go:421] Adding new service port \"volume-expand-7844-2168/csi-hostpathplugin:dummy\" at 100.71.146.126:12345/TCP\nI0617 01:01:52.226432       1 service.go:421] Adding new service port \"volume-expand-7844-2168/csi-hostpath-provisioner:dummy\" at 100.64.170.2:12345/TCP\nI0617 01:01:52.226443       1 service.go:421] Adding new service port \"volume-9937-5615/csi-hostpath-attacher:dummy\" at 100.65.125.217:12345/TCP\nI0617 01:01:52.226515       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:52.262586       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.172428ms\"\nI0617 01:01:52.405702       1 service.go:306] Service volume-9937-5615/csi-hostpathplugin updated: 1 ports\nI0617 01:01:52.486561       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-attacher updated: 1 ports\nI0617 01:01:52.530458       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:01:52.714829       1 service.go:306] Service volume-9937-5615/csi-hostpath-provisioner updated: 1 ports\nI0617 01:01:52.934713       1 service.go:306] Service ephemeral-9221-7051/csi-hostpathplugin updated: 1 ports\nI0617 01:01:53.026093       1 service.go:306] Service volume-9937-5615/csi-hostpath-resizer updated: 1 ports\nI0617 01:01:53.229332       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-provisioner updated: 1 ports\nI0617 01:01:53.229383       1 service.go:421] Adding new service port 
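The "Stale service" / "Stale udp service NodePort" lines above are UDP-specific cleanup. When the udp-port was added to services-6222/nodeport-update-service, kube-proxy flagged the cluster IP (100.64.216.77) and the freshly opened NodePort (31996) as stale so that conntrack entries created before the rule change get flushed; unlike TCP, UDP flows have no teardown, so without this step old conntrack entries could keep steering datagrams to endpoints that no longer apply. The Service shape implied by the log, with only the selector assumed:

apiVersion: v1
kind: Service
metadata:
  name: nodeport-update-service
  namespace: services-6222
spec:
  type: NodePort                 # NodePorts 31038/tcp4 and 31996/udp4 were opened above
  selector:
    app: example                 # assumed; not visible in the log
  ports:
  - name: tcp-port
    port: 80
    protocol: TCP
  - name: udp-port
    port: 80
    protocol: UDP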
\"volume-9937-5615/csi-hostpath-provisioner:dummy\" at 100.70.232.15:12345/TCP\nI0617 01:01:53.229418       1 service.go:421] Adding new service port \"ephemeral-9221-7051/csi-hostpathplugin:dummy\" at 100.69.236.226:12345/TCP\nI0617 01:01:53.229442       1 service.go:421] Adding new service port \"volume-9937-5615/csi-hostpath-resizer:dummy\" at 100.68.31.212:12345/TCP\nI0617 01:01:53.229450       1 service.go:421] Adding new service port \"ephemeral-9221-7051/csi-hostpath-provisioner:dummy\" at 100.69.167.252:12345/TCP\nI0617 01:01:53.229462       1 service.go:421] Adding new service port \"volume-9937-5615/csi-hostpathplugin:dummy\" at 100.69.51.0:12345/TCP\nI0617 01:01:53.229491       1 service.go:421] Adding new service port \"ephemeral-9221-7051/csi-hostpath-attacher:dummy\" at 100.69.132.22:12345/TCP\nI0617 01:01:53.229502       1 service.go:421] Adding new service port \"volume-expand-7844-2168/csi-hostpath-snapshotter:dummy\" at 100.67.232.1:12345/TCP\nI0617 01:01:53.229613       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:53.276973       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.568565ms\"\nI0617 01:01:53.318083       1 service.go:306] Service volume-9937-5615/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:01:53.620087       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-resizer updated: 1 ports\nI0617 01:01:53.917572       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:01:54.277271       1 service.go:421] Adding new service port \"volume-9937-5615/csi-hostpath-snapshotter:dummy\" at 100.70.70.28:12345/TCP\nI0617 01:01:54.277305       1 service.go:421] Adding new service port \"ephemeral-9221-7051/csi-hostpath-resizer:dummy\" at 100.65.78.170:12345/TCP\nI0617 01:01:54.277317       1 service.go:421] Adding new service port \"ephemeral-9221-7051/csi-hostpath-snapshotter:dummy\" at 100.71.65.253:12345/TCP\nI0617 01:01:54.277419       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:54.346808       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"69.544827ms\"\nI0617 01:01:55.112480       1 service.go:306] Service services-4141/up-down-3 updated: 1 ports\nI0617 01:01:55.347742       1 service.go:421] Adding new service port \"services-4141/up-down-3\" at 100.70.237.169:80/TCP\nI0617 01:01:55.347883       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:55.419598       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"71.868464ms\"\nI0617 01:01:56.419960       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:56.463440       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.587826ms\"\nI0617 01:01:59.310895       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:59.379946       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"69.139238ms\"\nI0617 01:01:59.675927       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:01:59.720832       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.981236ms\"\nI0617 01:02:00.721590       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:00.759494       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.003436ms\"\nI0617 01:02:01.760136       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:01.810069       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.023427ms\"\nI0617 01:02:02.673535       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:02.713178       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.757018ms\"\nI0617 
01:02:03.675116       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:03.713115       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.067306ms\"\nI0617 01:02:04.479979       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:04.518772       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.852516ms\"\nI0617 01:02:06.273134       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:06.340325       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.281844ms\"\nI0617 01:02:26.679120       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-attacher updated: 1 ports\nI0617 01:02:26.679184       1 service.go:421] Adding new service port \"volume-expand-3709-2314/csi-hostpath-attacher:dummy\" at 100.66.3.103:12345/TCP\nI0617 01:02:26.679274       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:26.760281       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"81.086818ms\"\nI0617 01:02:26.760390       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:26.827289       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"66.961034ms\"\nI0617 01:02:27.118224       1 service.go:306] Service volume-expand-3709-2314/csi-hostpathplugin updated: 1 ports\nI0617 01:02:27.412730       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-provisioner updated: 1 ports\nI0617 01:02:27.706413       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-resizer updated: 1 ports\nI0617 01:02:27.706467       1 service.go:421] Adding new service port \"volume-expand-3709-2314/csi-hostpathplugin:dummy\" at 100.70.221.112:12345/TCP\nI0617 01:02:27.706487       1 service.go:421] Adding new service port \"volume-expand-3709-2314/csi-hostpath-provisioner:dummy\" at 100.65.82.37:12345/TCP\nI0617 01:02:27.706500       1 service.go:421] Adding new service port \"volume-expand-3709-2314/csi-hostpath-resizer:dummy\" at 100.68.145.99:12345/TCP\nI0617 01:02:27.706607       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:27.758523       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.050757ms\"\nI0617 01:02:27.999705       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:02:28.758702       1 service.go:421] Adding new service port \"volume-expand-3709-2314/csi-hostpath-snapshotter:dummy\" at 100.66.134.75:12345/TCP\nI0617 01:02:28.758831       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:28.811588       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.911211ms\"\nI0617 01:02:32.865202       1 service.go:306] Service services-6222/nodeport-update-service updated: 0 ports\nI0617 01:02:32.865248       1 service.go:446] Removing service port \"services-6222/nodeport-update-service:tcp-port\"\nI0617 01:02:32.865261       1 service.go:446] Removing service port \"services-6222/nodeport-update-service:udp-port\"\nI0617 01:02:32.865353       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:32.934990       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"69.730907ms\"\nI0617 01:02:32.935158       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:32.996974       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"61.938174ms\"\nI0617 01:02:34.308217       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:34.348239       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.090328ms\"\nI0617 01:02:34.973955       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:35.013539       1 
proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.673332ms\"\nW0617 01:02:35.376366       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nI0617 01:02:35.561542       1 service.go:306] Service services-4141/up-down-2 updated: 0 ports\nI0617 01:02:35.571786       1 service.go:306] Service services-4141/up-down-3 updated: 0 ports\nI0617 01:02:36.014217       1 service.go:446] Removing service port \"services-4141/up-down-3\"\nI0617 01:02:36.014262       1 service.go:446] Removing service port \"services-4141/up-down-2\"\nI0617 01:02:36.014396       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:36.065043       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.824203ms\"\nI0617 01:02:37.066168       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:37.116022       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.932207ms\"\nI0617 01:02:38.117294       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:38.158186       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.034296ms\"\nI0617 01:02:50.418468       1 service.go:306] Service volume-9937-5615/csi-hostpath-attacher updated: 0 ports\nI0617 01:02:50.418505       1 service.go:446] Removing service port \"volume-9937-5615/csi-hostpath-attacher:dummy\"\nI0617 01:02:50.418611       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:50.466953       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.434384ms\"\nI0617 01:02:50.467094       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:50.509814       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.799909ms\"\nI0617 01:02:50.869792       1 service.go:306] Service volume-9937-5615/csi-hostpathplugin updated: 0 ports\nI0617 01:02:51.164711       1 service.go:306] Service volume-9937-5615/csi-hostpath-provisioner updated: 0 ports\nI0617 01:02:51.460238       1 service.go:306] Service volume-9937-5615/csi-hostpath-resizer updated: 0 ports\nI0617 01:02:51.460276       1 service.go:446] Removing service port \"volume-9937-5615/csi-hostpathplugin:dummy\"\nI0617 01:02:51.460289       1 service.go:446] Removing service port \"volume-9937-5615/csi-hostpath-provisioner:dummy\"\nI0617 01:02:51.460296       1 service.go:446] Removing service port \"volume-9937-5615/csi-hostpath-resizer:dummy\"\nI0617 01:02:51.460406       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:51.514578       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"54.282429ms\"\nI0617 01:02:51.766486       1 service.go:306] Service volume-9937-5615/csi-hostpath-snapshotter updated: 0 ports\nI0617 01:02:52.514691       1 service.go:446] Removing service port \"volume-9937-5615/csi-hostpath-snapshotter:dummy\"\nI0617 01:02:52.514868       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:52.566772       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.086015ms\"\nI0617 01:02:54.069360       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-attacher updated: 0 ports\nI0617 01:02:54.069406       1 service.go:446] Removing service port \"ephemeral-9221-7051/csi-hostpath-attacher:dummy\"\nI0617 01:02:54.069519       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:54.126939       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"57.51821ms\"\nI0617 01:02:54.541762       1 service.go:306] Service ephemeral-9221-7051/csi-hostpathplugin updated: 0 ports\nI0617 01:02:54.541802       1 service.go:446] 
Removing service port \"ephemeral-9221-7051/csi-hostpathplugin:dummy\"\nI0617 01:02:54.541911       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:54.597483       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"55.664264ms\"\nI0617 01:02:54.848585       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-provisioner updated: 0 ports\nI0617 01:02:55.147668       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-resizer updated: 0 ports\nI0617 01:02:55.456897       1 service.go:306] Service ephemeral-9221-7051/csi-hostpath-snapshotter updated: 0 ports\nI0617 01:02:55.456961       1 service.go:446] Removing service port \"ephemeral-9221-7051/csi-hostpath-resizer:dummy\"\nI0617 01:02:55.456975       1 service.go:446] Removing service port \"ephemeral-9221-7051/csi-hostpath-snapshotter:dummy\"\nI0617 01:02:55.456983       1 service.go:446] Removing service port \"ephemeral-9221-7051/csi-hostpath-provisioner:dummy\"\nI0617 01:02:55.457118       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:55.494012       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.032737ms\"\nI0617 01:02:56.494207       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:02:56.527665       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.568005ms\"\nI0617 01:03:11.826586       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-attacher updated: 0 ports\nI0617 01:03:11.826631       1 service.go:446] Removing service port \"volume-expand-3709-2314/csi-hostpath-attacher:dummy\"\nI0617 01:03:11.826719       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:11.859027       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.384762ms\"\nI0617 01:03:11.868772       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:11.901324       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.611983ms\"\nI0617 01:03:12.294896       1 service.go:306] Service volume-expand-3709-2314/csi-hostpathplugin updated: 0 ports\nI0617 01:03:12.601628       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-provisioner updated: 0 ports\nI0617 01:03:12.901895       1 service.go:446] Removing service port \"volume-expand-3709-2314/csi-hostpathplugin:dummy\"\nI0617 01:03:12.901956       1 service.go:446] Removing service port \"volume-expand-3709-2314/csi-hostpath-provisioner:dummy\"\nI0617 01:03:12.902063       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:12.923999       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-resizer updated: 0 ports\nI0617 01:03:12.964217       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"62.320557ms\"\nI0617 01:03:13.225672       1 service.go:306] Service volume-expand-3709-2314/csi-hostpath-snapshotter updated: 0 ports\nI0617 01:03:13.964426       1 service.go:446] Removing service port \"volume-expand-3709-2314/csi-hostpath-resizer:dummy\"\nI0617 01:03:13.964474       1 service.go:446] Removing service port \"volume-expand-3709-2314/csi-hostpath-snapshotter:dummy\"\nI0617 01:03:13.964601       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:13.995885       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.460863ms\"\nI0617 01:03:29.393155       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:29.526585       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"133.472444ms\"\nI0617 01:03:30.634316       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:30.668982       1 proxier.go:824] \"syncProxyRules complete\" 
elapsed=\"34.736251ms\"\nI0617 01:03:31.948367       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:31.983070       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.80471ms\"\nI0617 01:03:35.139896       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-attacher updated: 1 ports\nI0617 01:03:35.139947       1 service.go:421] Adding new service port \"ephemeral-9020-469/csi-hostpath-attacher:dummy\" at 100.67.155.101:12345/TCP\nI0617 01:03:35.140016       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:35.223839       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"83.88813ms\"\nI0617 01:03:35.224178       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:35.267983       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.098829ms\"\nI0617 01:03:35.578541       1 service.go:306] Service ephemeral-9020-469/csi-hostpathplugin updated: 1 ports\nI0617 01:03:35.872536       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-provisioner updated: 1 ports\nI0617 01:03:36.166032       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-resizer updated: 1 ports\nI0617 01:03:36.166084       1 service.go:421] Adding new service port \"ephemeral-9020-469/csi-hostpathplugin:dummy\" at 100.65.179.174:12345/TCP\nI0617 01:03:36.166102       1 service.go:421] Adding new service port \"ephemeral-9020-469/csi-hostpath-provisioner:dummy\" at 100.70.82.246:12345/TCP\nI0617 01:03:36.166113       1 service.go:421] Adding new service port \"ephemeral-9020-469/csi-hostpath-resizer:dummy\" at 100.65.222.152:12345/TCP\nI0617 01:03:36.166187       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:36.201141       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.051467ms\"\nI0617 01:03:36.462129       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:03:37.177420       1 service.go:421] Adding new service port \"ephemeral-9020-469/csi-hostpath-snapshotter:dummy\" at 100.65.50.74:12345/TCP\nI0617 01:03:37.177527       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:37.223312       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.894784ms\"\nI0617 01:03:38.224259       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:38.258861       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.697121ms\"\nI0617 01:03:38.983264       1 service.go:306] Service services-3389/service-headless-toggled updated: 1 ports\nI0617 01:03:39.259541       1 service.go:421] Adding new service port \"services-3389/service-headless-toggled\" at 100.71.171.135:80/TCP\nI0617 01:03:39.259708       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:39.291852       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.331535ms\"\nI0617 01:03:40.121610       1 service.go:306] Service conntrack-2223/svc-udp updated: 1 ports\nI0617 01:03:40.292310       1 service.go:421] Adding new service port \"conntrack-2223/svc-udp:udp\" at 100.67.253.84:80/UDP\nI0617 01:03:40.292441       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:40.349826       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"57.533585ms\"\nI0617 01:03:47.564203       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:47.613622       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.4661ms\"\nI0617 01:03:49.425236       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:49.459358       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.191935ms\"\nI0617 
01:03:49.713233       1 proxier.go:841] \"Stale service\" protocol=\"udp\" svcPortName=\"conntrack-2223/svc-udp:udp\" clusterIP=\"100.67.253.84\"\nI0617 01:03:49.713273       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:49.753830       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.699687ms\"\nI0617 01:03:51.027547       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:51.067432       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.989144ms\"\nI0617 01:03:55.364216       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-attacher updated: 0 ports\nI0617 01:03:55.364262       1 service.go:446] Removing service port \"volume-expand-7844-2168/csi-hostpath-attacher:dummy\"\nI0617 01:03:55.364351       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:55.404467       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.175556ms\"\nI0617 01:03:55.404578       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:55.462437       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"57.922202ms\"\nI0617 01:03:55.816072       1 service.go:306] Service volume-expand-7844-2168/csi-hostpathplugin updated: 0 ports\nI0617 01:03:56.121089       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-provisioner updated: 0 ports\nI0617 01:03:56.416932       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-resizer updated: 0 ports\nI0617 01:03:56.416974       1 service.go:446] Removing service port \"volume-expand-7844-2168/csi-hostpath-resizer:dummy\"\nI0617 01:03:56.416990       1 service.go:446] Removing service port \"volume-expand-7844-2168/csi-hostpathplugin:dummy\"\nI0617 01:03:56.416999       1 service.go:446] Removing service port \"volume-expand-7844-2168/csi-hostpath-provisioner:dummy\"\nI0617 01:03:56.417085       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:56.513989       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"96.954107ms\"\nI0617 01:03:56.724901       1 service.go:306] Service volume-expand-7844-2168/csi-hostpath-snapshotter updated: 0 ports\nI0617 01:03:57.515003       1 service.go:446] Removing service port \"volume-expand-7844-2168/csi-hostpath-snapshotter:dummy\"\nI0617 01:03:57.515152       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:57.547474       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.473008ms\"\nI0617 01:03:59.421771       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:03:59.460638       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.961183ms\"\nI0617 01:04:00.049091       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:00.102892       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.914975ms\"\nI0617 01:04:08.407848       1 service.go:306] Service dns-1609/test-service-2 updated: 1 ports\nI0617 01:04:08.407937       1 service.go:421] Adding new service port \"dns-1609/test-service-2:http\" at 100.70.254.202:80/TCP\nI0617 01:04:08.407998       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:08.478770       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"70.836821ms\"\nI0617 01:04:08.478884       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:08.528519       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.702908ms\"\nI0617 01:04:10.679172       1 service.go:306] Service services-3389/service-headless-toggled updated: 0 ports\nI0617 01:04:10.679213       1 service.go:446] Removing service port \"services-3389/service-headless-toggled\"\nI0617 
01:04:10.679289       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:10.711880       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.656861ms\"\nI0617 01:04:10.712082       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:10.744930       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.006509ms\"\nI0617 01:04:16.427886       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:16.464184       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.357746ms\"\nI0617 01:04:16.558767       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:16.591275       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.550337ms\"\nI0617 01:04:16.592197       1 service.go:306] Service conntrack-2223/svc-udp updated: 0 ports\nI0617 01:04:17.592301       1 service.go:446] Removing service port \"conntrack-2223/svc-udp:udp\"\nI0617 01:04:17.592420       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:17.637739       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.455138ms\"\nI0617 01:04:21.167620       1 service.go:306] Service services-3389/service-headless-toggled updated: 1 ports\nI0617 01:04:21.167687       1 service.go:421] Adding new service port \"services-3389/service-headless-toggled\" at 100.71.171.135:80/TCP\nI0617 01:04:21.167792       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:21.206977       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.303558ms\"\nI0617 01:04:34.379508       1 service.go:306] Service ephemeral-7804-8814/csi-hostpath-attacher updated: 1 ports\nI0617 01:04:34.379552       1 service.go:421] Adding new service port \"ephemeral-7804-8814/csi-hostpath-attacher:dummy\" at 100.67.190.52:12345/TCP\nI0617 01:04:34.379631       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:34.425176       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.614601ms\"\nI0617 01:04:34.425288       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:34.486475       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"61.249566ms\"\nI0617 01:04:34.818133       1 service.go:306] Service ephemeral-7804-8814/csi-hostpathplugin updated: 1 ports\nI0617 01:04:35.117743       1 service.go:306] Service ephemeral-7804-8814/csi-hostpath-provisioner updated: 1 ports\nI0617 01:04:35.410796       1 service.go:306] Service ephemeral-7804-8814/csi-hostpath-resizer updated: 1 ports\nI0617 01:04:35.410836       1 service.go:421] Adding new service port \"ephemeral-7804-8814/csi-hostpathplugin:dummy\" at 100.64.107.87:12345/TCP\nI0617 01:04:35.410851       1 service.go:421] Adding new service port \"ephemeral-7804-8814/csi-hostpath-provisioner:dummy\" at 100.67.225.38:12345/TCP\nI0617 01:04:35.410860       1 service.go:421] Adding new service port \"ephemeral-7804-8814/csi-hostpath-resizer:dummy\" at 100.66.82.142:12345/TCP\nI0617 01:04:35.410939       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:35.458285       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.4432ms\"\nI0617 01:04:35.714146       1 service.go:306] Service ephemeral-7804-8814/csi-hostpath-snapshotter updated: 1 ports\nI0617 01:04:36.458721       1 service.go:421] Adding new service port \"ephemeral-7804-8814/csi-hostpath-snapshotter:dummy\" at 100.65.91.90:12345/TCP\nI0617 01:04:36.458832       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:36.517366       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"58.671981ms\"\nI0617 01:04:37.517633       1 proxier.go:854] \"Syncing iptables 
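services-3389/service-headless-toggled keeps appearing and disappearing in these entries (added at 01:03:39, removed at 01:04:10, re-added at 01:04:21) without ever being deleted: kube-proxy only programs rules for a Service while it is non-headless, and this e2e test toggles headless mode back and forth. Assuming the toggle works the way the sig-network "service.kubernetes.io/headless" test does, the headless state would look like:

apiVersion: v1
kind: Service
metadata:
  name: service-headless-toggled
  namespace: services-3389
  labels:
    service.kubernetes.io/headless: ""   # while present, kube-proxy treats the service as headless
spec:
  selector:
    app: example                         # assumed; not visible in the log
  ports:
  - port: 80                             # cluster IP 100.71.171.135 is programmed only while non-headless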
rules\"\nI0617 01:04:37.551131       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.587307ms\"\nI0617 01:04:38.551987       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:38.585056       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.149218ms\"\nI0617 01:04:39.585535       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:39.631273       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.850301ms\"\nI0617 01:04:41.426251       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-attacher updated: 0 ports\nI0617 01:04:41.426313       1 service.go:446] Removing service port \"ephemeral-9020-469/csi-hostpath-attacher:dummy\"\nI0617 01:04:41.426409       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:41.476574       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.269473ms\"\nI0617 01:04:41.476689       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:41.519930       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.315008ms\"\nI0617 01:04:41.874956       1 service.go:306] Service ephemeral-9020-469/csi-hostpathplugin updated: 0 ports\nI0617 01:04:42.171225       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-provisioner updated: 0 ports\nI0617 01:04:42.473801       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-resizer updated: 0 ports\nI0617 01:04:42.473835       1 service.go:446] Removing service port \"ephemeral-9020-469/csi-hostpathplugin:dummy\"\nI0617 01:04:42.473848       1 service.go:446] Removing service port \"ephemeral-9020-469/csi-hostpath-provisioner:dummy\"\nI0617 01:04:42.473855       1 service.go:446] Removing service port \"ephemeral-9020-469/csi-hostpath-resizer:dummy\"\nI0617 01:04:42.473979       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:42.525314       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.449503ms\"\nI0617 01:04:42.770158       1 service.go:306] Service ephemeral-9020-469/csi-hostpath-snapshotter updated: 0 ports\nI0617 01:04:43.525612       1 service.go:446] Removing service port \"ephemeral-9020-469/csi-hostpath-snapshotter:dummy\"\nI0617 01:04:43.525754       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:43.562023       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.414426ms\"\nI0617 01:04:44.480329       1 service.go:306] Service dns-1609/test-service-2 updated: 0 ports\nI0617 01:04:44.480369       1 service.go:446] Removing service port \"dns-1609/test-service-2:http\"\nI0617 01:04:44.480474       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:44.515588       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.207425ms\"\nI0617 01:04:45.515849       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:45.551815       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.044825ms\"\nI0617 01:04:53.580651       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:53.617066       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.473924ms\"\nI0617 01:04:53.617259       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:53.647736       1 service.go:306] Service services-8645/sourceip-test updated: 1 ports\nI0617 01:04:53.660661       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.546543ms\"\nI0617 01:04:53.771371       1 service.go:306] Service services-3389/service-headless-toggled updated: 0 ports\nI0617 01:04:54.661156       1 service.go:421] Adding new service port \"services-8645/sourceip-test\" at 100.68.52.235:8080/TCP\nI0617 
01:04:54.661192       1 service.go:446] Removing service port \"services-3389/service-headless-toggled\"\nI0617 01:04:54.661283       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:54.703656       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.495574ms\"\nI0617 01:04:59.590288       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:04:59.631977       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.735444ms\"\nI0617 01:05:06.673474       1 service.go:306] Service webhook-6103/e2e-test-webhook updated: 1 ports\nI0617 01:05:06.673523       1 service.go:421] Adding new service port \"webhook-6103/e2e-test-webhook\" at 100.66.10.109:8443/TCP\nI0617 01:05:06.673597       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:05:06.722003       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.473563ms\"\nI0617 01:05:06.722151       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:05:06.776519       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"54.470992ms\"\nI0617 01:05:11.225905       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:05:11.275990       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.131096ms\"\nI0617 01:05:11.383559       1 service.go:306] Service services-8645/sourceip-test updated: 0 ports\nI0617 01:05:11.383601       1 service.go:446] Removing service port \"services-8645/sourceip-test\"\nI0617 01:05:11.383677       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:05:11.432055       1 service.go:306] Service webhook-9727/e2e-test-webhook updated: 1 ports\nI0617 01:05:11.441355       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"57.724247ms\"\nI0617 01:05:11.901156       1 service.go:306] Service webhook-6103/e2e-test-webhook updated: 0 ports\nI0617 01:05:12.442384       1 service.go:421] Adding new service port \"webhook-9727/e2e-test-webhook\" at 100.65.56.36:8443/TCP\nI0617 01:05:12.442419       1 service.go:446] Removing service port \"webhook-6103/e2e-test-webhook\"\nI0617 01:05:12.442524       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:05:12.474335       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.957308ms\"\nI0617 01:05:13.259528       1 service.go:306] Service webhook-8819/e2e-test-webhook updated: 1 ports\nI0617 01:05:13.259573       1 service.go:421] Adding new service port \"webhook-8819/e2e-test-webhook\" at 100.71.33.2:8443/TCP\nI0617 01:05:13.259657       1 proxier.go:854] \"Syncing iptables rules\"\nI0617 01:05:13.295062       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.486177ms\"\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-60-41.sa-east-1.compute.internal ====\n==== START logs for container kube-scheduler of pod kube-system/kube-scheduler-ip-172-20-38-69.sa-east-1.compute.internal ====\nI0617 00:46:40.542837       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0617 00:46:40.543237       1 flags.go:59] FLAG: --address=\"0.0.0.0\"\nI0617 00:46:40.543250       1 flags.go:59] FLAG: --algorithm-provider=\"\"\nI0617 00:46:40.543254       1 flags.go:59] FLAG: --allow-metric-labels=\"[]\"\nI0617 00:46:40.543263       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0617 00:46:40.543267       1 flags.go:59] FLAG: --authentication-kubeconfig=\"\"\nI0617 00:46:40.543270       1 flags.go:59] FLAG: --authentication-skip-lookup=\"false\"\nI0617 00:46:40.543276       1 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl=\"10s\"\nI0617 00:46:40.543281       1 flags.go:59] FLAG: 
I0617 00:46:40.543281       1 flags.go:59] FLAG: --authentication-tolerate-lookup-failure="true"
I0617 00:46:40.543285       1 flags.go:59] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]"
I0617 00:46:40.543299       1 flags.go:59] FLAG: --authorization-kubeconfig=""
I0617 00:46:40.543302       1 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
I0617 00:46:40.543306       1 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
I0617 00:46:40.543311       1 flags.go:59] FLAG: --bind-address="0.0.0.0"
I0617 00:46:40.543317       1 flags.go:59] FLAG: --cert-dir=""
I0617 00:46:40.543321       1 flags.go:59] FLAG: --client-ca-file=""
I0617 00:46:40.543324       1 flags.go:59] FLAG: --config="/var/lib/kube-scheduler/config.yaml"
I0617 00:46:40.543329       1 flags.go:59] FLAG: --contention-profiling="true"
I0617 00:46:40.543332       1 flags.go:59] FLAG: --disabled-metrics="[]"
I0617 00:46:40.543337       1 flags.go:59] FLAG: --experimental-logging-sanitization="false"
I0617 00:46:40.543345       1 flags.go:59] FLAG: --feature-gates=""
I0617 00:46:40.543360       1 flags.go:59] FLAG: --hard-pod-affinity-symmetric-weight="1"
I0617 00:46:40.543365       1 flags.go:59] FLAG: --help="false"
I0617 00:46:40.543369       1 flags.go:59] FLAG: --http2-max-streams-per-connection="0"
I0617 00:46:40.543374       1 flags.go:59] FLAG: --kube-api-burst="100"
I0617 00:46:40.543378       1 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0617 00:46:40.543383       1 flags.go:59] FLAG: --kube-api-qps="50"
I0617 00:46:40.543389       1 flags.go:59] FLAG: --kubeconfig=""
I0617 00:46:40.543392       1 flags.go:59] FLAG: --leader-elect="true"
I0617 00:46:40.543396       1 flags.go:59] FLAG: --leader-elect-lease-duration="15s"
I0617 00:46:40.543400       1 flags.go:59] FLAG: --leader-elect-renew-deadline="10s"
I0617 00:46:40.543403       1 flags.go:59] FLAG: --leader-elect-resource-lock="leases"
I0617 00:46:40.543407       1 flags.go:59] FLAG: --leader-elect-resource-name="kube-scheduler"
I0617 00:46:40.543411       1 flags.go:59] FLAG: --leader-elect-resource-namespace="kube-system"
I0617 00:46:40.543415       1 flags.go:59] FLAG: --leader-elect-retry-period="2s"
I0617 00:46:40.543419       1 flags.go:59] FLAG: --lock-object-name="kube-scheduler"
I0617 00:46:40.543422       1 flags.go:59] FLAG: --lock-object-namespace="kube-system"
I0617 00:46:40.543426       1 flags.go:59] FLAG: --log-backtrace-at=":0"
I0617 00:46:40.543439       1 flags.go:59] FLAG: --log-dir=""
I0617 00:46:40.543443       1 flags.go:59] FLAG: --log-file="/var/log/kube-scheduler.log"
I0617 00:46:40.543447       1 flags.go:59] FLAG: --log-file-max-size="1800"
I0617 00:46:40.543451       1 flags.go:59] FLAG: --log-flush-frequency="5s"
I0617 00:46:40.543455       1 flags.go:59] FLAG: --logging-format="text"
I0617 00:46:40.543458       1 flags.go:59] FLAG: --logtostderr="false"
I0617 00:46:40.543462       1 flags.go:59] FLAG: --master=""
I0617 00:46:40.543465       1 flags.go:59] FLAG: --one-output="false"
I0617 00:46:40.543469       1 flags.go:59] FLAG: --permit-address-sharing="false"
I0617 00:46:40.543472       1 flags.go:59] FLAG: --permit-port-sharing="false"
I0617 00:46:40.543476       1 flags.go:59] FLAG: --policy-config-file=""
I0617 00:46:40.543479       1 flags.go:59] FLAG: --policy-configmap=""
I0617 00:46:40.543483       1 flags.go:59] FLAG: --policy-configmap-namespace="kube-system"
I0617 00:46:40.543489       1 flags.go:59] FLAG: --port="10251"
I0617 00:46:40.543493       1 flags.go:59] FLAG: --profiling="true"
I0617 00:46:40.543497       1 flags.go:59] FLAG: --requestheader-allowed-names="[]"
I0617 00:46:40.543506       1 flags.go:59] FLAG: --requestheader-client-ca-file=""
I0617 00:46:40.543510       1 flags.go:59] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
I0617 00:46:40.543515       1 flags.go:59] FLAG: --requestheader-group-headers="[x-remote-group]"
I0617 00:46:40.543526       1 flags.go:59] FLAG: --requestheader-username-headers="[x-remote-user]"
I0617 00:46:40.543530       1 flags.go:59] FLAG: --scheduler-name="default-scheduler"
I0617 00:46:40.543534       1 flags.go:59] FLAG: --secure-port="10259"
I0617 00:46:40.543538       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=""
I0617 00:46:40.543541       1 flags.go:59] FLAG: --skip-headers="false"
I0617 00:46:40.543545       1 flags.go:59] FLAG: --skip-log-headers="false"
I0617 00:46:40.543548       1 flags.go:59] FLAG: --stderrthreshold="2"
I0617 00:46:40.543552       1 flags.go:59] FLAG: --tls-cert-file=""
I0617 00:46:40.543555       1 flags.go:59] FLAG: --tls-cipher-suites="[]"
I0617 00:46:40.543565       1 flags.go:59] FLAG: --tls-min-version=""
I0617 00:46:40.543568       1 flags.go:59] FLAG: --tls-private-key-file=""
I0617 00:46:40.543571       1 flags.go:59] FLAG: --tls-sni-cert-key="[]"
I0617 00:46:40.543576       1 flags.go:59] FLAG: --use-legacy-policy-config="false"
I0617 00:46:40.543580       1 flags.go:59] FLAG: --v="2"
I0617 00:46:40.543583       1 flags.go:59] FLAG: --version="false"
I0617 00:46:40.543589       1 flags.go:59] FLAG: --vmodule=""
I0617 00:46:40.543593       1 flags.go:59] FLAG: --write-config-to=""
I0617 00:46:41.144652       1 serving.go:347] Generated self-signed cert in-memory
W0617 00:46:41.631637       1 authentication.go:308] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0617 00:46:41.631662       1 authentication.go:332] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0617 00:46:41.631677       1 authorization.go:184] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0617 00:46:51.651596       1 factory.go:194] "Creating scheduler from algorithm provider" algorithmProvider="DefaultProvider"
I0617 00:46:51.657470       1 configfile.go:72] Using component config:
apiVersion: kubescheduler.config.k8s.io/v1beta1
clientConnection:
  acceptContentTypes: ""
  burst: 100
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /var/lib/kube-scheduler/kubeconfig
  qps: 50
enableContentionProfiling: true
enableProfiling: true
healthzBindAddress: 0.0.0.0:10251
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: true
  leaseDuration: 15s
  renewDeadline: 10s
  resourceLock: leases
  resourceName: kube-scheduler
  resourceNamespace: kube-system
  retryPeriod: 2s
metricsBindAddress: 0.0.0.0:10251
parallelism: 16
percentageOfNodesToScore: 0
podInitialBackoffSeconds: 1
podMaxBackoffSeconds: 10
profiles:
- pluginConfig:
  - args:
      apiVersion: kubescheduler.config.k8s.io/v1beta1
      kind: DefaultPreemptionArgs
      minCandidateNodesAbsolute: 100
      minCandidateNodesPercentage: 10
    name: DefaultPreemption
  - args:
      apiVersion: kubescheduler.config.k8s.io/v1beta1
      hardPodAffinityWeight: 1
      kind: InterPodAffinityArgs
    name: InterPodAffinity
  - args:
      apiVersion: kubescheduler.config.k8s.io/v1beta1
      kind: NodeAffinityArgs
    name: NodeAffinity
  - args:
      apiVersion: kubescheduler.config.k8s.io/v1beta1
      kind: NodeResourcesFitArgs
    name: NodeResourcesFit
  - args:
      apiVersion: kubescheduler.config.k8s.io/v1beta1
      kind: NodeResourcesLeastAllocatedArgs
      resources:
      - name: cpu
        weight: 1
      - name: memory
        weight: 1
    name: NodeResourcesLeastAllocated
  - args:
      apiVersion: kubescheduler.config.k8s.io/v1beta1
      defaultingType: System
      kind: PodTopologySpreadArgs
    name: PodTopologySpread
  - args:
      apiVersion: kubescheduler.config.k8s.io/v1beta1
      bindTimeoutSeconds: 600
      kind: VolumeBindingArgs
    name: VolumeBinding
  plugins:
    bind:
      enabled:
      - name: DefaultBinder
        weight: 0
    filter:
      enabled:
      - name: NodeUnschedulable
        weight: 0
      - name: NodeName
        weight: 0
      - name: TaintToleration
        weight: 0
      - name: NodeAffinity
        weight: 0
      - name: NodePorts
        weight: 0
      - name: NodeResourcesFit
        weight: 0
      - name: VolumeRestrictions
        weight: 0
      - name: EBSLimits
        weight: 0
      - name: GCEPDLimits
        weight: 0
      - name: NodeVolumeLimits
        weight: 0
      - name: AzureDiskLimits
        weight: 0
      - name: VolumeBinding
        weight: 0
      - name: VolumeZone
        weight: 0
      - name: PodTopologySpread
        weight: 0
      - name: InterPodAffinity
        weight: 0
    permit: {}
    postBind: {}
    postFilter:
      enabled:
      - name: DefaultPreemption
        weight: 0
    preBind:
      enabled:
      - name: VolumeBinding
        weight: 0
    preFilter:
      enabled:
      - name: NodeResourcesFit
        weight: 0
      - name: NodePorts
        weight: 0
      - name: PodTopologySpread
        weight: 0
      - name: InterPodAffinity
        weight: 0
      - name: VolumeBinding
        weight: 0
      - name: NodeAffinity
        weight: 0
    preScore:
      enabled:
      - name: InterPodAffinity
        weight: 0
      - name: PodTopologySpread
        weight: 0
      - name: TaintToleration
        weight: 0
      - name: NodeAffinity
        weight: 0
    queueSort:
      enabled:
      - name: PrioritySort
        weight: 0
    reserve:
      enabled:
      - name: VolumeBinding
        weight: 0
    score:
      enabled:
      - name: NodeResourcesBalancedAllocation
        weight: 1
      - name: ImageLocality
        weight: 1
      - name: InterPodAffinity
        weight: 1
      - name: NodeResourcesLeastAllocated
        weight: 1
      - name: NodeAffinity
        weight: 1
      - name: NodePreferAvoidPods
        weight: 10000
      - name: PodTopologySpread
        weight: 2
      - name: TaintToleration
        weight: 1
  schedulerName: default-scheduler

I0617 00:46:51.657494       1 server.go:138] Starting Kubernetes Scheduler version v1.21.2
W0617 00:46:51.659593       1 authorization.go:47] Authorization is disabled
W0617 00:46:51.659606       1 authentication.go:47] Authentication is disabled
I0617 00:46:51.659622       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0617 00:46:51.660613       1 tlsconfig.go:200] loaded serving cert ["Generated self signed cert"]: "localhost@1623890801" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer="localhost-ca@1623890800" (2021-06-16 23:46:40 +0000 UTC to 2022-06-16 23:46:40 +0000 UTC (now=2021-06-17 00:46:51.660600298 +0000 UTC))
I0617 00:46:51.660756       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1623890801" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1623890801" (2021-06-16 23:46:41 +0000 UTC to 2022-06-16 23:46:41 +0000 UTC (now=2021-06-17 00:46:51.660749516 +0000 UTC))
I0617 00:46:51.660782       1 secure_serving.go:197] Serving securely on [::]:10259
I0617 00:46:51.660797       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0617 00:47:11.568443       1 trace.go:205] Trace[2037218224]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.663) (total time: 19905ms):
Trace[2037218224]: [19.905055081s] [19.905055081s] END
E0617 00:47:11.568473       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0617 00:47:11.568724       1 trace.go:205] Trace[970706971]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.661) (total time: 19906ms):
Trace[970706971]: [19.906794423s] [19.906794423s] END
E0617 00:47:11.568746       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0617 00:47:11.568914       1 trace.go:205] Trace[301449343]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.661) (total time: 19907ms):
Trace[301449343]: [19.90721268s] [19.90721268s] END
E0617 00:47:11.568978       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0617 00:47:11.569152       1 trace.go:205] Trace[744358502]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.661) (total time: 19908ms):
Trace[744358502]: [19.908088544s] [19.908088544s] END
E0617 00:47:11.569215       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0617 00:47:11.569299       1 trace.go:205] Trace[212705942]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.661) (total time: 19907ms):
Trace[212705942]: [19.907400521s] [19.907400521s] END
E0617 00:47:11.569313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0617 00:47:11.569359       1 trace.go:205] Trace[1367490402]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.662) (total time: 19906ms):
Trace[1367490402]: [19.906960006s] [19.906960006s] END
E0617 00:47:11.569474       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0617 00:47:11.569515       1 trace.go:205] Trace[1547117256]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.662) (total time: 19907ms):
Trace[1547117256]: [19.907126786s] [19.907126786s] END
E0617 00:47:11.569578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0617 00:47:11.569681       1 trace.go:205] Trace[2143583283]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.662) (total time: 19906ms):
Trace[2143583283]: [19.906808298s] [19.906808298s] END
E0617 00:47:11.569696       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0617 00:47:11.569814       1 trace.go:205] Trace[716504396]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.662) (total time: 19907ms):
Trace[716504396]: [19.907676354s] [19.907676354s] END
E0617 00:47:11.569830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0617 00:47:11.569943       1 trace.go:205] Trace[1402222535]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.661) (total time: 19908ms):
Trace[1402222535]: [19.90885414s] [19.90885414s] END
E0617 00:47:11.569958       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0617 00:47:11.570063       1 trace.go:205] Trace[1051706129]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.663) (total time: 19906ms):
Trace[1051706129]: [19.906962394s] [19.906962394s] END
E0617 00:47:11.570119       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0617 00:47:11.570068       1 trace.go:205] Trace[1838407032]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.662) (total time: 19907ms):
Trace[1838407032]: [19.907511385s] [19.907511385s] END
E0617 00:47:11.570198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get "https://127.0.0.1/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0617 00:47:11.570210       1 trace.go:205] Trace[1732627926]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (17-Jun-2021 00:46:51.662) (total time: 19908ms):
Trace[1732627926]: [19.908064049s] [19.908064049s] END
E0617 00:47:11.570248       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
E0617 00:47:18.414098       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0617 00:47:18.414274       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0617 00:47:18.414420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0617 00:47:18.414530       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0617 00:47:18.414650       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0617 00:47:18.414751       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0617 00:47:18.414848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0617 00:47:18.414944       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
scope\nE0617 00:47:18.415165       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope\nE0617 00:47:18.415309       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope\nE0617 00:47:18.415451       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope\nE0617 00:47:18.415547       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope\nI0617 00:47:21.261830       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler...\nI0617 00:47:21.271138       1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler\nI0617 00:47:24.629116       1 node_tree.go:65] Added node \"ip-172-20-38-69.sa-east-1.compute.internal\" in group \"sa-east-1:\\x00:sa-east-1a\" to NodeTree\nI0617 00:47:37.940889       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/kube-flannel-ds-h4s24\" node=\"ip-172-20-38-69.sa-east-1.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI0617 00:47:38.025881       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/dns-controller-5f98b58844-cjrql\" err=\"0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:47:38.063698       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-pcz96\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0617 00:47:38.084780       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-xchvv\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0617 00:47:39.281942       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/dns-controller-5f98b58844-cjrql\" err=\"0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:47:39.282126       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-pcz96\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0617 00:47:39.282803       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-xchvv\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0617 00:48:03.232919       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/dns-controller-5f98b58844-cjrql\" err=\"0/1 nodes are available: 1 node(s) didn't 
match Pod's node affinity/selector.\"\nI0617 00:48:03.233089       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-pcz96\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0617 00:48:03.233197       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-xchvv\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0617 00:48:07.297927       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/dns-controller-5f98b58844-cjrql\" err=\"0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:48:07.298150       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-pcz96\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0617 00:48:07.298282       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-xchvv\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0617 00:48:29.012938       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-pcz96\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0617 00:48:29.013744       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-xchvv\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0617 00:48:29.027037       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/dns-controller-5f98b58844-cjrql\" node=\"ip-172-20-38-69.sa-east-1.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI0617 00:48:29.042989       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/kops-controller-fltkc\" node=\"ip-172-20-38-69.sa-east-1.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI0617 00:48:39.319474       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-pcz96\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0617 00:48:39.319654       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-xchvv\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0617 00:48:42.904887       1 node_tree.go:65] Added node \"ip-172-20-46-228.sa-east-1.compute.internal\" in group \"sa-east-1:\\x00:sa-east-1a\" to NodeTree\nI0617 00:48:42.931260       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/kube-flannel-ds-6nvnl\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=2 feasibleNodes=1\nI0617 00:48:49.324487       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-pcz96\" err=\"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\"\nI0617 00:48:49.332800       1 factory.go:338] \"Unable to 
schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-xchvv\" err=\"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\"\nI0617 00:48:53.447577       1 node_tree.go:65] Added node \"ip-172-20-48-221.sa-east-1.compute.internal\" in group \"sa-east-1:\\x00:sa-east-1a\" to NodeTree\nI0617 00:48:53.487675       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/kube-flannel-ds-b5hmw\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=3 feasibleNodes=1\nI0617 00:48:53.960424       1 node_tree.go:65] Added node \"ip-172-20-60-41.sa-east-1.compute.internal\" in group \"sa-east-1:\\x00:sa-east-1a\" to NodeTree\nI0617 00:48:53.984409       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/kube-flannel-ds-smz9r\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=4 feasibleNodes=1\nI0617 00:48:59.330861       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-pcz96\" err=\"0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\"\nI0617 00:49:00.330945       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-xchvv\" err=\"0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\"\nI0617 00:49:00.350654       1 node_tree.go:65] Added node \"ip-172-20-55-34.sa-east-1.compute.internal\" in group \"sa-east-1:\\x00:sa-east-1a\" to NodeTree\nI0617 00:49:00.376028       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/kube-flannel-ds-9ff4g\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:49:09.340244       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/coredns-f45c4bf76-pcz96\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:49:10.340441       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-xchvv\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:49:19.083069       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/coredns-f45c4bf76-sng9x\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=3\nI0617 00:51:59.830758       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"containers-2160/client-containers-3d86181a-a504-4e01-90e1-8a13b0d177df\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:00.143254       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-5462/pfpod\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:00.171090       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replicaset-3839/test-rs-qtnjc\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:00.242446       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"var-expansion-7826/var-expansion-c7ff07d7-e4c3-4364-a9bc-edd67331a1e9\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:00.321285       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1149/ss2-0\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:00.671421       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-6590/dns-test-7986e049-0cb4-4396-9fb4-382dc9d1aa7b\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:00.813471       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-361/pod1\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:01.003822       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4746/hostexec-ip-172-20-46-228.sa-east-1.compute.internal-tqbw6\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:01.469489       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-5480/pod-1bad920c-7687-40c0-b6aa-17707ce0d576\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:01.501655       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-9338/labelsupdatea05b93cb-31d5-4efc-918e-5f7abd59eb89\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:02.108659       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"var-expansion-1992/var-expansion-611df0fa-9bb2-4291-b449-b130b36add0f\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:02.507186       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8061/hostexec-ip-172-20-46-228.sa-east-1.compute.internal-m8sxw\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:02.537512       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"crd-webhook-8985/sample-crd-conversion-webhook-deployment-697cdbd8f4-llptq\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:03.262868       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-4231/downwardapi-volume-b96a4bc1-5ba5-4da9-80f4-678b36b48e5d\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:03.419022       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-8813/image-pull-testc90c906f-3b62-4de0-b89a-e0c9240b2917\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:04.150425       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6759/httpd\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:04.672146       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-3509/sample-webhook-deployment-78988fc6cd-rhlph\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:05.467038       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3992-1428/csi-hostpath-attacher-0\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:05.709512       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"volume-expand-5117-840/csi-hostpath-attacher-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:05.929128       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3992-1428/csi-hostpathplugin-0\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:06.149032       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5117-840/csi-hostpathplugin-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:06.202527       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3992-1428/csi-hostpath-provisioner-0\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:06.436175       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5117-840/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:06.497255       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3992-1428/csi-hostpath-resizer-0\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:06.738206       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5117-840/csi-hostpath-resizer-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:06.787991       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3992-1428/csi-hostpath-snapshotter-0\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:07.037128       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5117-840/csi-hostpath-snapshotter-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:07.445877       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5808-6397/csi-mockplugin-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:07.734985       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5808-6397/csi-mockplugin-attacher-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:10.877155       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1149/ss2-1\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:10.899346       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replicaset-3839/test-rs-s5phz\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:11.174517       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replicaset-3839/test-rs-jzxrt\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:11.176119       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replicaset-3839/test-rs-8zf7d\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:13.754299       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5887/aws-injector\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:16.783491       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-9461/agnhost-primary-jvn66\" 
node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:18.101342       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-800/inline-volume-s2zpw\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-s2zpw-my-volume\\\" not found.\"\nI0617 00:52:19.499517       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-8448/pod-handle-http-request\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:19.973340       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-361/pod2\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:22.396702       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pvc-protection-7450/pvc-tester-lgqt2\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:23.065194       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1436/aws-injector\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:23.087488       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"containers-3515/client-containers-6feb4ba8-bf2f-4944-b554-d98ecf8536e2\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:24.104412       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-8448/pod-with-poststart-exec-hook\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:25.174456       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-800-202/csi-hostpath-attacher-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:25.626746       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-800-202/csi-hostpathplugin-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:25.755646       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-3568/pod-7befc04c-914a-475c-b4ba-564e61d277b5\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:25.915605       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-800-202/csi-hostpath-provisioner-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:26.215892       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-800-202/csi-hostpath-resizer-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:26.373107       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-9278/busybox-c32ca464-64a0-42ad-8492-0d7f18b74270\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:26.525772       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-800-202/csi-hostpath-snapshotter-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:26.725820       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-4362/hairpin\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:26.943568       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" 
pod=\"ephemeral-800/inline-volume-tester-pk4k8\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-pk4k8-my-volume-0\\\" not found.\"\nI0617 00:52:27.045424       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-3518/test-webserver-49e5b297-4585-43d6-8b51-b0f079d615f5\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:27.125175       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-51/hostexec-ip-172-20-60-41.sa-east-1.compute.internal-gnp65\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:27.320398       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-4415/image-pull-testb63032d4-543f-43b4-a04a-b7956b2d889b\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:31.426314       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-8119/security-context-6359b8cf-649c-4e79-94b8-0392c4f35402\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:32.024335       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1161/ss-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:32.813924       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8061/pod-9d415c7a-88fe-4420-b483-13775916eef8\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:34.936173       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-7572/test-pod\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:35.617208       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"pvc-protection-7450/pvc-tester-xndxp\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-protectionqtw8m\\\" is being deleted.\"\nI0617 00:52:37.055007       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1201/hostexec-ip-172-20-48-221.sa-east-1.compute.internal-f7wrb\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:37.701375       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"hostpath-6123/pod-host-path-test\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:37.775541       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-1708/image-pull-test93e24128-c01f-4963-bf30-b9753624f86a\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:38.105544       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6714/inline-volume-jj7gf\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-jj7gf-my-volume\\\" not found.\"\nI0617 00:52:39.105982       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5887/aws-client\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:39.361483       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4746/pod-subpath-test-preprovisionedpv-htlj\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:41.301279       1 scheduler.go:604] \"Successfully 
bound pod to node\" pod=\"persistent-local-volumes-test-1201/pod-ae3530a8-5b62-4046-b176-166511c9e991\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:41.448296       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7267/hostexec-ip-172-20-60-41.sa-east-1.compute.internal-q98lk\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:42.944014       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-6714-2630/csi-hostpath-attacher-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:43.407458       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-6714-2630/csi-hostpathplugin-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:43.707449       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-6714-2630/csi-hostpath-provisioner-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:43.935611       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8375-1648/csi-hostpath-attacher-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:43.986273       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1149/ss2-2\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:44.103829       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-6714-2630/csi-hostpath-resizer-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:44.338049       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8375-1648/csi-hostpathplugin-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:44.395526       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-6714-2630/csi-hostpath-snapshotter-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:44.623533       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8375-1648/csi-hostpath-provisioner-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:44.799918       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6714/inline-volume-tester-fddxt\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-fddxt-my-volume-0\\\" not found.\"\nI0617 00:52:44.920125       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8375-1648/csi-hostpath-resizer-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:45.215800       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8375-1648/csi-hostpath-snapshotter-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:46.327670       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-3844/simpletest.rc-kd4wx\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:46.327983       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-3844/simpletest.rc-t572c\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 
00:52:46.328887       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-3844/simpletest.rc-qlslk\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:46.331433       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-3844/simpletest.rc-98kvv\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:46.356255       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-3844/simpletest.rc-fpk6f\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:46.356675       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-3844/simpletest.rc-wtb2z\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:46.368721       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-3844/simpletest.rc-4q2gc\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:46.383646       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-3844/simpletest.rc-59sr9\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:46.384463       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-3844/simpletest.rc-7sjkt\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:46.391318       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-3844/simpletest.rc-lbm7z\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:46.466544       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6714/inline-volume-tester-fddxt\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0617 00:52:48.374448       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/service-proxy-disabled-b4dn8\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:48.385812       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/service-proxy-disabled-rnrbs\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:48.393056       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/service-proxy-disabled-cpc9s\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:48.468090       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6714/inline-volume-tester-fddxt\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0617 00:52:49.500394       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-2184/test-pod-07f6b687-d562-495b-9875-96950a907e2e\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:50.998298       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1436/aws-client\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:52.471867       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6714/inline-volume-tester-fddxt\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0617 00:52:53.993348       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"volume-7267/exec-volume-test-preprovisionedpv-5dgt\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:54.701107       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5043-6975/csi-hostpath-attacher-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:55.147457       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5043-6975/csi-hostpathplugin-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:55.431640       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5043-6975/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:55.481745       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-9591/pod-ephm-test-projected-xx7j\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:55.729877       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5043-6975/csi-hostpath-resizer-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:56.032467       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5043-6975/csi-hostpath-snapshotter-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:57.758889       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1161/ss-1\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:58.010831       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/service-proxy-toggled-lbtvb\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:58.026556       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/service-proxy-toggled-qsm8z\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:58.034855       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/service-proxy-toggled-btrh8\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:52:58.226734       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7336/hostexec-ip-172-20-55-34.sa-east-1.compute.internal-8g9zf\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:52:59.734199       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5117/pod-01ddcf75-3570-4d7b-8c8d-1861b4b8e675\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:00.477315       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6714/inline-volume-tester-fddxt\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0617 00:53:01.437640       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/verify-service-up-host-exec-pod\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:02.374015       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-8255/implicit-root-uid\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:03.601030      
 1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1149/ss2-2\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:03.872589       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/verify-service-up-exec-pod-n4bvz\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:04.640278       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7336/pod-50af6f2d-bd4f-473c-be6d-83c19d57d058\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:05.742253       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-6332/busybox-3d97c90a-115e-4d63-a3de-6464c24fd5da\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:05.773841       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3992/pod-subpath-test-dynamicpv-74lj\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:06.250929       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6733-7010/csi-hostpath-attacher-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:06.707463       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6733-7010/csi-hostpathplugin-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:07.011550       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6733-7010/csi-hostpath-provisioner-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:07.313821       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6733-7010/csi-hostpath-resizer-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:07.604909       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6733-7010/csi-hostpath-snapshotter-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:09.448478       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7336/pod-fb314aa2-33a8-4b31-968c-b22f0af254d3\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:09.941852       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/verify-service-down-host-exec-pod\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:10.108928       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1201/pod-d5e19225-61dd-4960-96db-b6a5c25bcb32\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:10.482782       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6714/inline-volume-tester-fddxt\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0617 00:53:12.503798       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-3746/busybox1\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:12.643147       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1149/ss2-0\" 
node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:13.356964       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-8462/sample-webhook-deployment-78988fc6cd-cqtnw\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:16.310166       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/verify-service-down-host-exec-pod\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:16.788616       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-8025/pod-ef3e96b4-4cf9-4fff-896b-57b330f0d3d0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:18.525061       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5043/pod-bc981062-47d6-42c1-9058-508d5e084e27\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:19.652175       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4612/pod-subpath-test-inlinevolume-th5z\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:20.488922       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6714/inline-volume-tester-fddxt\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0617 00:53:21.031543       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-8777/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:53:21.158682       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-4920/termination-message-container5287eacf-e0d5-4200-8f48-1b102f2852c6\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:22.490201       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-8777/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:53:22.767122       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/verify-service-up-host-exec-pod\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:23.227554       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-9481/busybox-user-0-37dab240-18cd-49bc-ac46-bddcd1ea08b2\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:24.168916       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7468/pod-subpath-test-inlinevolume-lfzp\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:24.491308       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-8777/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:53:25.201090       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/verify-service-up-exec-pod-g5n5s\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0617 00:53:25.317646       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5117/pod-23433e5f-b5eb-4091-aa2d-f266bff06d35\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:26.880441       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1149/ss2-2\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:26.907378       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1161/ss-2\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:27.766262       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-8777/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:53:28.380986       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-7370/busybox-privileged-true-9ce390ec-c4a5-40fb-993d-c18ac63c499c\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:29.493466       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-8777/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:53:30.320582       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3406-7093/csi-mockplugin-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:30.493837       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6714/inline-volume-tester-fddxt\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0617 00:53:30.611447       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3406-7093/csi-mockplugin-attacher-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:31.482857       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7297/verify-service-down-host-exec-pod\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:31.493908       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-8777/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:53:31.844230       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6725-5711/csi-mockplugin-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:31.994561       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1237/pod-subpath-test-inlinevolume-9rcj\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:34.129021       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5043/pod-275d798c-a021-4aa0-af12-980191db6eaa\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:35.441606       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"webhook-2903/sample-webhook-deployment-78988fc6cd-tg6mc\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:35.688457       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8612/hostexec-ip-172-20-55-34.sa-east-1.compute.internal-vfk9m\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:37.507810       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-969/projected-volume-1ddc9bc5-ee1a-46e5-a0ab-510abbf8e564\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:38.786300       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8101/hostexec-ip-172-20-60-41.sa-east-1.compute.internal-thkr4\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:40.500185       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6714/inline-volume-tester-fddxt\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0617 00:53:41.854365       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-4971/pod-configmaps-0c273417-fc4e-48ee-b0cc-254a0eb07233\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:42.467285       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-800/inline-volume-tester-pk4k8\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:42.567487       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-5989/pod-logs-websocket-28e1c542-5105-46bf-8747-d12ae7f30fc5\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:42.693696       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-543-4862/csi-hostpath-attacher-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:43.161074       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-543-4862/csi-hostpathplugin-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:43.363654       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-6795/external-provisioner-dh8x6\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:43.440998       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-543-4862/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:43.697559       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-6495/downwardapi-volume-f426c8aa-ce8f-466f-b247-d7d7d26d94f3\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:43.748438       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-543-4862/csi-hostpath-resizer-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:44.043366       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-543-4862/csi-hostpath-snapshotter-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:44.067517       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1149/ss2-1\" 
node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:45.084120       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6733/pod-subpath-test-dynamicpv-47c8\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:46.467038       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-752/pod1\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:47.199091       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-543/pod-8b52385b-5c29-4e84-8461-5882d5b34f21\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:48.784627       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-4635/hostexec-ip-172-20-48-221.sa-east-1.compute.internal-z4mk9\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:50.505674       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-6714/inline-volume-tester-fddxt\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0617 00:53:50.578744       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-71/external-provisioner-frrb2\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:51.614974       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-752/pod2\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:54.792027       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8101/pod-subpath-test-preprovisionedpv-92r2\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:54.995025       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8612/pod-subpath-test-preprovisionedpv-hbpb\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:56.351405       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5679/pod-subpath-test-dynamicpv-xk4n\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:57.055983       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8375/pod-subpath-test-dynamicpv-n5db\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:57.617212       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5503/hostexec-ip-172-20-60-41.sa-east-1.compute.internal-ftpjr\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:57.939832       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-6795/nfs-server\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:53:59.411820       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-8297/netserver-0\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:59.555235       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-8297/netserver-1\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:59.699227       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-8297/netserver-2\" 
node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:53:59.844322       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-8297/netserver-3\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:00.137744       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-4402/concurrent-27064854-zbcxw\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:00.517405       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-6714/inline-volume-tester-fddxt\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:02.834082       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3165/hostexec-ip-172-20-60-41.sa-east-1.compute.internal-qhdbt\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:04.709807       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-3021/nfs-server\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:05.838419       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-2217/pod-secrets-7ee08aa0-a718-4faa-84d0-1ddbe5533f34\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:06.308750       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-71/pod-subpath-test-dynamicpv-2xld\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:08.176409       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-4635/exec-volume-test-preprovisionedpv-4xx8\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:09.255876       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3165/pod-subpath-test-preprovisionedpv-ks4m\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:09.479932       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5503/local-injector\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:13.849437       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7887/hostexec-ip-172-20-55-34.sa-east-1.compute.internal-bslll\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:14.211058       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3406/pvc-volume-tester-mcj6p\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:15.557635       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6725/pvc-volume-tester-khkjq\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:17.254262       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-3690/pod-ready\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:17.315663       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7855-704/csi-mockplugin-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:17.600074       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"csi-mock-volumes-7855-704/csi-mockplugin-attacher-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:18.028462       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-mhzb6\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.079014       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-44x9n\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.079329       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-nssxq\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.079546       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-v5jpd\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.095958       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-lllx6\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.096134       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-6brss\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.096370       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-x26ws\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.121063       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-bmq5c\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.121765       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-rvnc7\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.122034       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-q952c\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.122232       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-6f6hh\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.123118       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-j5x7r\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.123501       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-9jw6l\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.123615       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-7f5g8\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0617 00:54:18.170803       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-lcwvw\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.176662       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-rlhfb\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.191211       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-2922v\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.241378       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-sztwn\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.257802       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-szbvh\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.259292       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-7pg9t\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.259561       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-7x7jg\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.259725       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-c7fqb\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.259793       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-lgxhg\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.259859       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-rbkw4\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.259920       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-6h9s4\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.259984       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-78bjr\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.260028       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-szv2q\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.261062       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7887/pod-f8346ea0-ba56-4bc1-9477-aa9e6cf9841c\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:18.303527       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-mwnc7\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.344272       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-qsl94\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.366176       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-hhtp4\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.395724       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-j2fqq\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.411731       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-3021/pvc-tester-qh2pz\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.427259       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-kkkhk\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.430168       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-3373/pod-configmaps-5ad987d6-401a-4b13-91da-55fdf641b917\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.471155       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-kcttx\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.516485       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-lbbn8\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.568406       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-2t9sr\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.621213       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-tw96s\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.668044       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-4dt2j\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.718066       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-knk8m\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.769513       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-7f7s9\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:18.820127       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-8865/cleanup40-44d4a9f9-41a0-4389-8c6a-70e09ea23f07-2475h\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:24.065186    
   1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-6795/nfs-injector\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:24.634048       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1149/ss2-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:25.562799       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-3021/pvc-tester-s9cl9\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:27.284650       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-8297/test-container-pod\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:32.847613       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5503/local-client\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:33.366066       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2399/hostexec-ip-172-20-55-34.sa-east-1.compute.internal-n8jvm\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:33.784398       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1570/hostexec-ip-172-20-55-34.sa-east-1.compute.internal-c5hcv\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:34.306776       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"init-container-2446/pod-init-73df3b7f-16a3-49db-83c7-0928c58e017f\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:34.881190       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-3021/pvc-tester-t76gk\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:38.694653       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4258/hostexec-ip-172-20-48-221.sa-east-1.compute.internal-tsdfd\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:38.856207       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7887/pod-3b785687-28f3-4e80-82f1-e6a2724a25b5\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:39.249492       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5808/pvc-volume-tester-5dvfj\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:39.387427       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-2004/exec-volume-test-preprovisionedpv-x5bh\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:39.918740       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8441/hostexec-ip-172-20-60-41.sa-east-1.compute.internal-v9v42\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:39.934855       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-1211/rs-kxb2g\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:39.941460       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-1211/rs-2bbkm\" 
node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:39.944519       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-1211/rs-m9fnp\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:40.325728       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4182-4345/csi-mockplugin-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:41.895107       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1161/ss-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:41.988408       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"endpointslice-4973/pod1\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:42.133041       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"endpointslice-4973/pod2\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:42.965545       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4258/pod-f67fae9c-cdc0-4b6d-baf5-ba3d2a87703b\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:43.910916       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-6795/nfs-client\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:47.793831       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1161/ss-1\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:48.329030       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1570/pod-7243b7a1-28ca-4c5d-af7d-aa518c2d49c7\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:50.886588       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-1062/pod-submit-remove-5651c7c5-7b73-4cdb-a3aa-90edc9604f0b\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:53.682703       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-5808/inline-volume-fcnk7\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:54.599092       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8441/pod-subpath-test-preprovisionedpv-vlvl\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:54.940242       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2399/pod-subpath-test-preprovisionedpv-rdgs\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:55.585113       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5128/pod-subpath-test-inlinevolume-mgc2\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:56.167777       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-8338/pod-0f13a2b5-2a86-460e-a046-0dfec7f82289\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:56.680085       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3575-8837/csi-hostpath-attacher-0\" 
node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:57.115991       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3575-8837/csi-hostpathplugin-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:57.382348       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-1211/rs-qbndg\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:57.407143       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3575-8837/csi-hostpath-provisioner-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:57.598217       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-143/pod-subpath-test-dynamicpv-q8z8\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:54:57.701491       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3575-8837/csi-hostpath-resizer-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:57.993087       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3575-8837/csi-hostpath-snapshotter-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:54:58.929742       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7885/aws-injector\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:00.744439       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4182/pvc-volume-tester-9rgj4\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:00.978584       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-1211/rs-bgvsd\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:02.210819       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-5013/dns-test-9cb292d0-03bf-4054-a655-4558765c8634\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:02.880459       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1161/ss-2\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:03.032554       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1570/pod-493e209a-ef87-4961-9157-663e888fb115\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:04.123384       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-6170/downwardapi-volume-53d6ffb3-f464-4621-9b48-65cb1cbcb136\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:04.372602       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-4207/pfpod\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:04.724959       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-6166/sample-webhook-deployment-78988fc6cd-pk4dw\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:10.273291       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"volume-8278/hostexec-ip-172-20-48-221.sa-east-1.compute.internal-f48xt\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:10.828586       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"subpath-6118/pod-subpath-test-configmap-6gzn\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:11.987432       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-781-1675/csi-mockplugin-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:12.271271       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-781-1675/csi-mockplugin-attacher-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:14.485507       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8116/hostexec-ip-172-20-55-34.sa-east-1.compute.internal-l55v8\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:16.029601       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2119/hostexec-ip-172-20-60-41.sa-east-1.compute.internal-khwfp\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:16.800699       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-1215/agnhost-primary-2n6dx\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:18.531113       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5790/hostexec-ip-172-20-55-34.sa-east-1.compute.internal-b8hhv\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:21.457765       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-242-7148/csi-hostpath-attacher-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:21.644273       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-6403/pod-configmaps-74bc6aba-ea9d-409c-8d17-6528c01b4c36\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:21.913673       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-242-7148/csi-hostpathplugin-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:22.209072       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-242-7148/csi-hostpath-provisioner-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:22.580131       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-242-7148/csi-hostpath-resizer-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:22.875330       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-242-7148/csi-hostpath-snapshotter-0\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:24.118298       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7855/pvc-volume-tester-6qkdf\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:24.585821       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-781/pvc-volume-tester-lbbfg\" 
node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:24.696909       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7885/aws-client\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:24.989304       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-8278/local-injector\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:25.914471       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-206/hostexec-ip-172-20-60-41.sa-east-1.compute.internal-4shng\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:26.821369       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5790/pod-9408dbc4-6759-402b-8ce0-e6b0f0d75bf3\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:26.896639       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-3575/pod-subpath-test-dynamicpv-gmmq\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:55:27.383384       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-7979/security-context-589a20b5-cb85-4150-964f-d04bbbb0ef0c\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:27.778753       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8857/external-provisioner-5hw7q\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:28.574307       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3575/pod-subpath-test-dynamicpv-gmmq\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:30.341469       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-242/hostpath-injector\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:30.523339       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-477/hostexec-ip-172-20-55-34.sa-east-1.compute.internal-hjfsc\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:31.292965       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8076-6109/csi-mockplugin-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:31.602027       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8076-6109/csi-mockplugin-attacher-0\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:31.986723       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5249/external-provisioner-rtw6g\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:34.569000       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5249/nfs-server\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:35.065534       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-1523/downward-api-f5901917-3c8b-433d-930c-82c0fd142a68\" 
node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:35.396402       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8857/pod-subpath-test-dynamicpv-nztv\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:35.531564       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5790/pod-29337d7c-0e65-490f-8893-d914abd2b7ad\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:37.770805       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8767/hostexec-ip-172-20-55-34.sa-east-1.compute.internal-kn27m\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:38.039370       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8076/pvc-volume-tester-mp6fx\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:39.364495       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-206/exec-volume-test-preprovisionedpv-cv82\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:39.512449       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6629/netserver-0\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:39.582573       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2119/pod-subpath-test-preprovisionedpv-hkhh\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:39.660745       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6629/netserver-1\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:39.698389       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8116/pod-subpath-test-preprovisionedpv-8wv2\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:39.804133       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6629/netserver-2\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:39.950402       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6629/netserver-3\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:40.720709       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8857/pod-subpath-test-dynamicpv-nztv\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:41.533192       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-1006/pod-configmaps-164152a9-5e4a-4ff0-b9e1-8b15b86d3b14\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:44.789119       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-8278/local-client\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:47.952588       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6646/hostexec-ip-172-20-60-41.sa-east-1.compute.internal-lwtmd\" node=\"ip-172-20-60-41.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:52.406306       1 scheduler.go:604] 
\"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5438/hostexec-ip-172-20-46-228.sa-east-1.compute.internal-g647g\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:53.563689       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-3292/pod-projected-configmaps-0ebaa2b9-cb3b-4173-bd2a-ce07eae74b21\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:53.716281       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5249/exec-volume-test-preprovisionedpv-6wgg\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:53.955341       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-477/pod-subpath-test-preprovisionedpv-9257\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:53.971531       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4099/pod-subpath-test-inlinevolume-pbvl\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:54.053829       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-6646/pod-b0cded5a-95a0-4151-ab4e-9c8ba7018bca\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:55:54.376963       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8767/pod-subpath-test-preprovisionedpv-hn6t\" node=\"ip-172-20-55-34.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:55.591123       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-6646/pod-b0cded5a-95a0-4151-ab4e-9c8ba7018bca\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:55:55.849266       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-8561/pod-0\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:57.389610       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5551/hostexec-ip-172-20-48-221.sa-east-1.compute.internal-msxw8\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:57.591502       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-6646/pod-b0cded5a-95a0-4151-ab4e-9c8ba7018bca\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0617 00:55:57.635521       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5438/pod-8a00224e-aca0-42a5-9672-f44f322e50ad\" node=\"ip-172-20-46-228.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0617 00:55:58.109264       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-1585/sample-webhook-deployment-78988fc6cd-xtpcs\" node=\"ip-172-20-48-221.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0617 00:55:59.935046       1 
scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7377/hostexec-ip-172-20-48-221.sa-east-1.compute.internal-5gx4g" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:00.090102       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-4893/pod-c4df1b72-ae5b-471f-be32-541f5156f071" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:00.122468       1 scheduler.go:604] "Successfully bound pod to node" pod="cronjob-2001/successful-jobs-history-limit-27064856-ggmc4" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:01.426687       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-8679/httpd" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:03.602238       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-1143-9758/csi-hostpath-attacher-0" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:03.980903       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-8767/pod-subpath-test-preprovisionedpv-hn6t" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:04.052782       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-1143-9758/csi-hostpathplugin-0" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:04.346133       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-1143-9758/csi-hostpath-provisioner-0" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:04.645947       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-1143-9758/csi-hostpath-resizer-0" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:04.958421       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-1143-9758/csi-hostpath-snapshotter-0" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:05.554161       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6629/test-container-pod" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:05.698417       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6629/host-test-container-pod" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:06.030704       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-4485/pod-projected-secrets-b93b7f96-b99d-44aa-bbbf-94061a16dac0" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:06.209756       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-242/hostpath-client" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:06.814390       1 scheduler.go:604] "Successfully bound pod to node" pod="container-probe-3047/liveness-10c7ed92-8eb8-4100-a30c-8de2db278ff2" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:08.431463       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7377/pod-subpath-test-preprovisionedpv-6qwr" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:08.477648       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-test-9731/alpine-nnp-false-c928167f-23f2-4f2b-9b59-02236a5f15ff" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:08.582035       1 scheduler.go:604] "Successfully bound pod to node" pod="dns-2813/dns-test-9c32576b-11f0-4026-b97b-86b0b8b3a692" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:09.951622       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5551/pod-subpath-test-preprovisionedpv-rs5k" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:11.039870       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-5913/pod-update-activedeadlineseconds-2450a255-8f88-41e5-951e-f362aab66ed9" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:12.346667       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-9455/ss-0" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:14.236837       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-4046/httpd" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:17.414607       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5551/pod-subpath-test-preprovisionedpv-rs5k" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:17.846263       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-9253/annotationupdatecad0bc3b-770a-4d69-8efe-81ede8f6028c" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:18.541935       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-60/pod-handle-http-request" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:18.612587       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-4319/security-context-1b0e418d-c863-4c77-96fa-cbea9d6d438c" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:18.665065       1 scheduler.go:604] "Successfully bound pod to node" pod="secrets-2570/pod-secrets-64f13ff9-6d7a-4325-b130-3767f24ba1c0" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:19.564498       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-5848/pod-2791df82-d88d-4780-bda4-e9f910207890" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:20.080995       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-6626/hostexec-ip-172-20-55-34.sa-east-1.compute.internal-7b8zx" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:20.214404       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-2549/pod-configmaps-e3e3e4f8-6f2b-492c-b1b4-e97c3ef3a1a2" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:21.131529       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-60/pod-with-prestop-exec-hook" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:23.217166       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-9455/ss-1" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:23.509478       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-1842/downwardapi-volume-3d111f36-7197-4293-92ec-3bc41c36f172" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:23.603709       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-9342/logs-generator" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:23.947148       1 scheduler.go:604] "Successfully bound pod to node" pod="job-688/all-succeed-6xpn2" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:23.947656       1 scheduler.go:604] "Successfully bound pod to node" pod="job-688/all-succeed-5ddhw" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:24.359739       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-7351/pod-bb7114a6-e8a1-46d4-b380-f0340118fd48" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:24.488480       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-6626/pod-40bae4e5-a825-4540-8033-1dbaa43b66b0" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:25.396913       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-8215/termination-message-container533d2a43-15e6-4520-bf4d-9aedacca0ed4" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:25.741458       1 scheduler.go:604] "Successfully bound pod to node" pod="job-688/all-succeed-7mszl" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:26.936431       1 scheduler.go:604] "Successfully bound pod to node" pod="job-688/all-succeed-6fbsz" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:27.021795       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5901-6678/csi-hostpath-attacher-0" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:27.453241       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5901-6678/csi-hostpathplugin-0" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:27.749059       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5901-6678/csi-hostpath-provisioner-0" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:28.038169       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5901-6678/csi-hostpath-resizer-0" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:28.235710       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-test-2285/explicit-nonroot-uid" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:28.351378       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5901-6678/csi-hostpath-snapshotter-0" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:28.536179       1 scheduler.go:604] "Successfully bound pod to node" pod="proxy-2698/proxy-service-pwnds-4qcln" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:28.721121       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-8765/pod-subpath-test-inlinevolume-zq4w" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:29.105566       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-7189/ss-0" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:31.187447       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-6626/pod-31222d59-8c7f-4045-abe2-18edc41fe15f" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:31.409878       1 scheduler.go:604] "Successfully bound pod to node" pod="dns-2869/dns-test-65d5e03d-ebde-4b82-ac05-d51be8fa686e" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:31.528779       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5901/pod-subpath-test-dynamicpv-fx2h" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:33.398348       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-3002/downward-api-9b5e50ab-5654-493a-8af3-715db3e92564" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:33.612908       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-5250/pod-handle-http-request" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:34.733340       1 scheduler.go:604] "Successfully bound pod to node" pod="e2e-privileged-pod-3174/privileged-pod" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:35.371680       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-6987/pod-projected-secrets-9b8462d5-3731-4312-b62e-ccbc9516c7ee" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:36.211249       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-5250/pod-with-poststart-http-hook" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:37.732427       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-382/pod-configmaps-d945232c-f15b-448f-b0f8-1a3517c7da4f" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:37.931002       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-4378/netserver-0" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:38.074266       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-4378/netserver-1" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:38.219640       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-4378/netserver-2" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:38.343587       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-8711/aws-injector" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:38.363475       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-4378/netserver-3" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:38.394425       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-6711/pod-service-account-defaultsa" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:38.540374       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-6711/pod-service-account-mountsa" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:38.687603       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-6711/pod-service-account-nomountsa" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:38.842646       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-6711/pod-service-account-defaultsa-mountspec" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:38.987045       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-6711/pod-service-account-mountsa-mountspec" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:39.128981       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-6711/pod-service-account-nomountsa-mountspec" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:39.273272       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-6711/pod-service-account-defaultsa-nomountspec" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:39.421559       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-6711/pod-service-account-mountsa-nomountspec" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:39.561106       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-6711/pod-service-account-nomountsa-nomountspec" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:39.705286       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-7475/nfs-server" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:39.913372       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5573/external-provisioner-bx5ps" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:40.821519       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-5736/nfs-server" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:40.994326       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-5124/hostexec-ip-172-20-48-221.sa-east-1.compute.internal-4ggxz" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:41.363240       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-5551/terminate-cmd-rpaac445b6d-9903-432f-8e6c-e80b30b09793" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:46.514266       1 scheduler.go:604] "Successfully bound pod to node" pod="kubelet-test-8942/bin-false3bc0eb30-2781-4be1-8e59-365e9a392bf0" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:46.731843       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-3298/external-provisioner-dgcj6" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:49.345560       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-5848/pod-eac9ea8c-e40a-4236-a0f5-499864b406d6" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:49.939908       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-5354/hostpathsymlink-injector" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:56:51.190541       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-7189/ss-1" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:52.413121       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-9172/rs-n4d6s" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:52.434975       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-9172/rs-9mxmw" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:52.435390       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-9172/rs-hm8xd" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:52.452087       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-9172/rs-zgx7l" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:52.452255       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-9172/rs-tzpxf" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:52.452403       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-9172/rs-xks6p" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:52.458417       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-9172/rs-2dcpr" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:52.467899       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-9172/rs-bgqdc" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:52.483282       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-9172/rs-j5qgn" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:52.484831       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-9172/rs-vj5xm" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:54.481615       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-7475/pvc-tester-7nhz7" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:54.545786       1 scheduler.go:604] "Successfully bound pod to node" pod="init-container-4397/pod-init-98c6e57f-7534-44c9-bdfc-2c6c26511b62" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:56:54.901588       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-5124/local-injector" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:00.187219       1 scheduler.go:604] "Successfully bound pod to node" pod="cronjob-2001/successful-jobs-history-limit-27064857-lrjb2" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:00.210035       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-5168/simple-27064857-9d2nt" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:01.716516       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-5354/hostpathsymlink-client" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:01.956495       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-4378/test-container-pod" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:02.101634       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-4378/host-test-container-pod" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:02.450867       1 scheduler.go:604] "Successfully bound pod to node" pod="apply-2301/deployment-55649fd747-rb5dd" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:02.457489       1 scheduler.go:604] "Successfully bound pod to node" pod="apply-2301/deployment-55649fd747-r2znb" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:02.457649       1 scheduler.go:604] "Successfully bound pod to node" pod="apply-2301/deployment-55649fd747-qlljv" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:02.599060       1 scheduler.go:604] "Successfully bound pod to node" pod="apply-2301/deployment-55649fd747-hg77x" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:02.607455       1 scheduler.go:604] "Successfully bound pod to node" pod="apply-2301/deployment-55649fd747-nskwt" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:02.842989       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2802/hostexec-ip-172-20-60-41.sa-east-1.compute.internal-7s64l" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:03.124876       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-5551/terminate-cmd-rpof36a5e41d-83cc-455a-8544-0ebb59225522" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:03.223654       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5573/pod-subpath-test-dynamicpv-dzkk" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:03.443635       1 scheduler.go:604] "Successfully bound pod to node" pod="disruption-9172/rs-726r8" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:04.506550       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-5252/pod-eab032d5-cc8e-4acf-b9e6-5c03d59e05d0" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:04.638142       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-6081/pod-subpath-test-inlinevolume-cc9t" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:05.474922       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-7189/ss-0" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:06.013748       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-6620/simpletest.rc-888l9" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:06.021942       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-6620/simpletest.rc-h8rfb" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:06.130509       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-4330/netserver-0" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:06.277258       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-4330/netserver-1" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:06.421671       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-4330/netserver-2" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:06.568039       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-4330/netserver-3" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:12.241442       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-5551/terminate-cmd-rpn8dfe4a24-3b55-4152-bbdd-061a968e8763" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:12.346185       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-7199/ss2-0" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:12.649520       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-8711/aws-client" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:13.346053       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-6117/security-context-af337721-cab4-4ddc-ad28-6c83833cebee" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:13.454228       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2356/pod1" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:13.603664       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2356/pod2" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:13.745958       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2356/pod3" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:16.430839       1 scheduler.go:604] "Successfully bound pod to node" pod="webhook-2785/sample-webhook-deployment-78988fc6cd-sn45l" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:16.952338       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-7199/ss2-1" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:17.314674       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-6039/hostexec-ip-172-20-60-41.sa-east-1.compute.internal-94msj" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:17.862616       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9017/hostexec-ip-172-20-60-41.sa-east-1.compute.internal-mxx2z" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:18.305266       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-3298/exec-volume-test-dynamicpv-2b8x" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:18.399116       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9592/hostexec-ip-172-20-46-228.sa-east-1.compute.internal-swprp" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:20.724378       1 scheduler.go:604] "Successfully bound pod to node" pod="webhook-2785/to-be-attached-pod" node="ip-172-20-55-34.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:21.333705       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-8391/hostexec-ip-172-20-48-221.sa-east-1.compute.internal-rmrfs" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:21.375527       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6126/test-new-deployment-847dcfb7fb-rzmf7" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:22.210479       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4322/pod-subpath-test-inlinevolume-7qdf" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:22.566819       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-7199/ss2-2" node="ip-172-20-46-228.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:23.789789       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-5736/pvc-tester-j6m46" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0617 00:57:23.841876       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-6039/pod-26bcdbcd-15d1-489b-bd55-8cb5aca39673" node="ip-172-20-60-41.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:24.716191       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-5124/local-client" node="ip-172-20-48-221.sa-east-1.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0617 00:57:24.779297       1 scheduler.go:604]