Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-05-29 00:49
Elapsed: 33m8s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0529 00:50:03.992139    4028 up.go:43] Cleaning up any leaked resources from previous cluster
I0529 00:50:03.992174    4028 dumplogs.go:38] /logs/artifacts/a5594fc6-c017-11eb-b3db-1ecf15fc999e/kops toolbox dump --name e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ec2-user
I0529 00:50:04.008839    4049 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0529 00:50:04.008964    4049 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io" not found
W0529 00:50:04.531565    4028 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0529 00:50:04.531647    4028 down.go:48] /logs/artifacts/a5594fc6-c017-11eb-b3db-1ecf15fc999e/kops delete cluster --name e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --yes
I0529 00:50:04.553223    4059 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0529 00:50:04.553308    4059 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io" not found
I0529 00:50:05.043255    4028 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/05/29 00:50:05 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0529 00:50:05.051014    4028 http.go:37] curl https://ip.jsb.workers.dev
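
The harness is determining its own public IP for the --admin-access allowlist on the create-cluster call below: the GCE metadata lookup returns 404 because this build has no external access config, so it falls back to a plain HTTPS echo service. A minimal sketch of that lookup, reconstructed from the two endpoints in the log (the Metadata-Flavor header and the ||-fallback are assumptions; GCE requires that header for metadata reads):

  # Try the GCE metadata service first; fall back to an external echo service.
  EXTERNAL_IP=$(curl -sf -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" \
    || curl -sf https://ip.jsb.workers.dev)
  echo "admin access CIDR: ${EXTERNAL_IP}/32"
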
I0529 00:50:05.165777    4028 up.go:144] /logs/artifacts/a5594fc6-c017-11eb-b3db-1ecf15fc999e/kops create cluster --name e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.20.7 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=309956199498/RHEL-8.3_HVM-20210209-x86_64-0-Hourly2-GP2 --channel=alpha --networking=cilium --container-runtime=docker --admin-access 34.69.7.130/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-2a --master-size c5.large
I0529 00:50:05.183081    4068 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0529 00:50:05.183281    4068 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0529 00:50:05.230143    4068 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0529 00:50:05.818668    4068 new_cluster.go:1023]  Cloud Provider ID = aws
... skipping 42 lines ...

I0529 00:50:34.800296    4028 up.go:181] /logs/artifacts/a5594fc6-c017-11eb-b3db-1ecf15fc999e/kops validate cluster --name e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0529 00:50:34.814502    4088 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0529 00:50:34.814695    4088 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io

W0529 00:50:36.493084    4088 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0529 00:50:46.536147    4088 validate_cluster.go:221] (will retry): cluster not yet healthy
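
The retries that follow (skipped below) all hit the same condition: api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io either does not resolve or still points at the 203.0.113.123 placeholder, so the validator cannot reach the API server. A hedged diagnostic sketch (dig is standard; the dns-controller deployment name is the kops default and assumed here):

  # What does the API record resolve to right now? The placeholder 203.0.113.123
  # means dns-controller has not yet written the real master IP.
  dig +short api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io

  # Once the master answers, check dns-controller for errors updating the DNS zone.
  kubectl -n kube-system logs deployment/dns-controller --tail=50
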
... skipping 64 lines ...
W0529 00:51:36.752461    4088 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
... skipping 128 lines ...
W0529 00:53:07.123834    4088 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
... skipping 208 lines ...
W0529 00:55:27.659119    4088 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
... skipping 80 lines ...
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 9 lines ...
Machine	i-0e1599e52cb362162				machine "i-0e1599e52cb362162" has not yet joined cluster
Pod	kube-system/cilium-5c4c4			system-node-critical pod "cilium-5c4c4" is not ready (cilium-agent)
Pod	kube-system/cilium-hjh65			system-node-critical pod "cilium-hjh65" is pending
Pod	kube-system/coredns-8f5559c9b-pxvkg		system-cluster-critical pod "coredns-8f5559c9b-pxvkg" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6t684	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6t684" is pending

Validation Failed
W0529 00:56:31.762035    4088 validate_cluster.go:221] (will retry): cluster not yet healthy
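
Validation has now moved past DNS: the master answers, but one machine has not joined and the cilium and coredns pods are still starting. A sketch of how one might drill into those pending system pods (standard kubectl; pod names copied from the log above):

  # Where are the flagged kube-system pods, and on which nodes?
  kubectl -n kube-system get pods -o wide

  # Scheduling and image-pull events for a pending pod.
  kubectl -n kube-system describe pod cilium-hjh65

  # Container logs for a pod that is running but not ready.
  kubectl -n kube-system logs cilium-5c4c4 -c cilium-agent
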
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 16 lines ...
Pod	kube-system/cilium-hjh65				system-node-critical pod "cilium-hjh65" is pending
Pod	kube-system/cilium-qh277				system-node-critical pod "cilium-qh277" is pending
Pod	kube-system/cilium-vhmhc				system-node-critical pod "cilium-vhmhc" is pending
Pod	kube-system/coredns-8f5559c9b-pxvkg			system-cluster-critical pod "coredns-8f5559c9b-pxvkg" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6t684		system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6t684" is pending

Validation Failed
W0529 00:56:44.441027    4088 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 15 lines ...
Pod	kube-system/cilium-hjh65				system-node-critical pod "cilium-hjh65" is not ready (cilium-agent)
Pod	kube-system/cilium-qh277				system-node-critical pod "cilium-qh277" is not ready (cilium-agent)
Pod	kube-system/cilium-vhmhc				system-node-critical pod "cilium-vhmhc" is not ready (cilium-agent)
Pod	kube-system/coredns-8f5559c9b-pxvkg			system-cluster-critical pod "coredns-8f5559c9b-pxvkg" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6t684		system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6t684" is pending

Validation Failed
W0529 00:56:57.090093    4088 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 10 lines ...
Pod	kube-system/cilium-5c4c4			system-node-critical pod "cilium-5c4c4" is not ready (cilium-agent)
Pod	kube-system/cilium-hjh65			system-node-critical pod "cilium-hjh65" is not ready (cilium-agent)
Pod	kube-system/cilium-vhmhc			system-node-critical pod "cilium-vhmhc" is not ready (cilium-agent)
Pod	kube-system/coredns-8f5559c9b-pxvkg		system-cluster-critical pod "coredns-8f5559c9b-pxvkg" is not ready (coredns)
Pod	kube-system/coredns-autoscaler-6f594f4c58-6t684	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6t684" is pending

Validation Failed
W0529 00:57:09.775451    4088 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 1141 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 474 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 00:59:47.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3371" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 00:59:47.695: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 00:59:48.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8850" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 00:59:48.596: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
... skipping 379 lines ...
May 29 00:59:49.745: INFO: AfterEach: Cleaning up test resources.
May 29 00:59:49.745: INFO: pvc is nil
May 29 00:59:49.745: INFO: Deleting PersistentVolume "hostpath-qfbpm"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":1,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 00:59:49.931: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 243 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 00:59:50.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3072" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1090
    should check if kubectl describe prints relevant information for cronjob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1193
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":1,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 00:59:51.380: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 179 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 00:59:53.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7718" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":-1,"completed":2,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 00:59:53.596: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 135 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
May 29 00:59:47.026: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b0f2f25-4146-456f-896f-ff5009527dc9" in namespace "projected-8421" to be "Succeeded or Failed"
May 29 00:59:47.189: INFO: Pod "downwardapi-volume-4b0f2f25-4146-456f-896f-ff5009527dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 162.270026ms
May 29 00:59:49.353: INFO: Pod "downwardapi-volume-4b0f2f25-4146-456f-896f-ff5009527dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326709223s
May 29 00:59:51.515: INFO: Pod "downwardapi-volume-4b0f2f25-4146-456f-896f-ff5009527dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488864383s
May 29 00:59:53.681: INFO: Pod "downwardapi-volume-4b0f2f25-4146-456f-896f-ff5009527dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.654797484s
May 29 00:59:55.843: INFO: Pod "downwardapi-volume-4b0f2f25-4146-456f-896f-ff5009527dc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.816725971s
STEP: Saw pod success
May 29 00:59:55.843: INFO: Pod "downwardapi-volume-4b0f2f25-4146-456f-896f-ff5009527dc9" satisfied condition "Succeeded or Failed"
May 29 00:59:56.005: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod downwardapi-volume-4b0f2f25-4146-456f-896f-ff5009527dc9 container client-container: <nil>
STEP: delete the pod
May 29 00:59:56.355: INFO: Waiting for pod downwardapi-volume-4b0f2f25-4146-456f-896f-ff5009527dc9 to disappear
May 29 00:59:56.517: INFO: Pod downwardapi-volume-4b0f2f25-4146-456f-896f-ff5009527dc9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.114 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 00:59:57.019: INFO: Driver local doesn't support ext4 -- skipping
... skipping 207 lines ...
• [SLOW TEST:13.601 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:516
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 00:59:59.577: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 61 lines ...
STEP: Destroying namespace "services-3197" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":2,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:00.312: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 98 lines ...
• [SLOW TEST:14.669 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 00:59:50.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 29 00:59:51.031: INFO: Waiting up to 5m0s for pod "pod-0cadc1f0-1ef3-453d-b6b3-e8365ccaa6cb" in namespace "emptydir-5988" to be "Succeeded or Failed"
May 29 00:59:51.191: INFO: Pod "pod-0cadc1f0-1ef3-453d-b6b3-e8365ccaa6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 159.976621ms
May 29 00:59:53.352: INFO: Pod "pod-0cadc1f0-1ef3-453d-b6b3-e8365ccaa6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320288076s
May 29 00:59:55.511: INFO: Pod "pod-0cadc1f0-1ef3-453d-b6b3-e8365ccaa6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480222136s
May 29 00:59:57.672: INFO: Pod "pod-0cadc1f0-1ef3-453d-b6b3-e8365ccaa6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.640997757s
May 29 00:59:59.833: INFO: Pod "pod-0cadc1f0-1ef3-453d-b6b3-e8365ccaa6cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.801415013s
STEP: Saw pod success
May 29 00:59:59.833: INFO: Pod "pod-0cadc1f0-1ef3-453d-b6b3-e8365ccaa6cb" satisfied condition "Succeeded or Failed"
May 29 00:59:59.995: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-0cadc1f0-1ef3-453d-b6b3-e8365ccaa6cb container test-container: <nil>
STEP: delete the pod
May 29 01:00:00.355: INFO: Waiting for pod pod-0cadc1f0-1ef3-453d-b6b3-e8365ccaa6cb to disappear
May 29 01:00:00.515: INFO: Pod pod-0cadc1f0-1ef3-453d-b6b3-e8365ccaa6cb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:10.779 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:00.861: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:00:01.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1727" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:01.731: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 74 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
May 29 00:59:49.281: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 29 00:59:49.440: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qsm6
STEP: Creating a pod to test subpath
May 29 00:59:49.613: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qsm6" in namespace "provisioning-7880" to be "Succeeded or Failed"
May 29 00:59:49.771: INFO: Pod "pod-subpath-test-inlinevolume-qsm6": Phase="Pending", Reason="", readiness=false. Elapsed: 157.968407ms
May 29 00:59:51.930: INFO: Pod "pod-subpath-test-inlinevolume-qsm6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316395309s
May 29 00:59:54.093: INFO: Pod "pod-subpath-test-inlinevolume-qsm6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479580155s
May 29 00:59:56.251: INFO: Pod "pod-subpath-test-inlinevolume-qsm6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.637930548s
May 29 00:59:58.410: INFO: Pod "pod-subpath-test-inlinevolume-qsm6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796738544s
May 29 01:00:00.569: INFO: Pod "pod-subpath-test-inlinevolume-qsm6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.955547963s
STEP: Saw pod success
May 29 01:00:00.569: INFO: Pod "pod-subpath-test-inlinevolume-qsm6" satisfied condition "Succeeded or Failed"
May 29 01:00:00.727: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-qsm6 container test-container-volume-inlinevolume-qsm6: <nil>
STEP: delete the pod
May 29 01:00:01.054: INFO: Waiting for pod pod-subpath-test-inlinevolume-qsm6 to disappear
May 29 01:00:01.216: INFO: Pod pod-subpath-test-inlinevolume-qsm6 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qsm6
May 29 01:00:01.216: INFO: Deleting pod "pod-subpath-test-inlinevolume-qsm6" in namespace "provisioning-7880"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:01.864: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 90 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":1,"skipped":22,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 00:59:51.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-5fd9a323-c396-4b97-a1ca-2cc696a60b0a
STEP: Creating a pod to test consume configMaps
May 29 00:59:52.569: INFO: Waiting up to 5m0s for pod "pod-configmaps-6e9cb973-2b12-476d-9ec3-51e5d5d40af5" in namespace "configmap-7630" to be "Succeeded or Failed"
May 29 00:59:52.728: INFO: Pod "pod-configmaps-6e9cb973-2b12-476d-9ec3-51e5d5d40af5": Phase="Pending", Reason="", readiness=false. Elapsed: 159.132842ms
May 29 00:59:54.887: INFO: Pod "pod-configmaps-6e9cb973-2b12-476d-9ec3-51e5d5d40af5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317704975s
May 29 00:59:57.045: INFO: Pod "pod-configmaps-6e9cb973-2b12-476d-9ec3-51e5d5d40af5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476327943s
May 29 00:59:59.204: INFO: Pod "pod-configmaps-6e9cb973-2b12-476d-9ec3-51e5d5d40af5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.635271147s
May 29 01:00:01.363: INFO: Pod "pod-configmaps-6e9cb973-2b12-476d-9ec3-51e5d5d40af5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.794253513s
STEP: Saw pod success
May 29 01:00:01.363: INFO: Pod "pod-configmaps-6e9cb973-2b12-476d-9ec3-51e5d5d40af5" satisfied condition "Succeeded or Failed"
May 29 01:00:01.522: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-configmaps-6e9cb973-2b12-476d-9ec3-51e5d5d40af5 container agnhost-container: <nil>
STEP: delete the pod
May 29 01:00:01.865: INFO: Waiting for pod pod-configmaps-6e9cb973-2b12-476d-9ec3-51e5d5d40af5 to disappear
May 29 01:00:02.024: INFO: Pod pod-configmaps-6e9cb973-2b12-476d-9ec3-51e5d5d40af5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:10.889 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:02.367: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 77 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:988
    should create/apply a valid CR for CRD with validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1007
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:02.389: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 67 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
May 29 01:00:01.324: INFO: Waiting up to 5m0s for pod "metadata-volume-034b259a-4cf2-4a7d-8f8a-01bda0b14cf2" in namespace "downward-api-1163" to be "Succeeded or Failed"
May 29 01:00:01.486: INFO: Pod "metadata-volume-034b259a-4cf2-4a7d-8f8a-01bda0b14cf2": Phase="Pending", Reason="", readiness=false. Elapsed: 161.68627ms
May 29 01:00:03.649: INFO: Pod "metadata-volume-034b259a-4cf2-4a7d-8f8a-01bda0b14cf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.32424097s
STEP: Saw pod success
May 29 01:00:03.649: INFO: Pod "metadata-volume-034b259a-4cf2-4a7d-8f8a-01bda0b14cf2" satisfied condition "Succeeded or Failed"
May 29 01:00:03.813: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod metadata-volume-034b259a-4cf2-4a7d-8f8a-01bda0b14cf2 container client-container: <nil>
STEP: delete the pod
May 29 01:00:04.146: INFO: Waiting for pod metadata-volume-034b259a-4cf2-4a7d-8f8a-01bda0b14cf2 to disappear
May 29 01:00:04.307: INFO: Pod metadata-volume-034b259a-4cf2-4a7d-8f8a-01bda0b14cf2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:00:04.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1163" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":32,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 14 lines ...
May 29 00:59:57.074: INFO: Got stdout from 52.78.71.152:22: Hello from ec2-user@ip-172-20-58-248.ap-northeast-2.compute.internal
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
May 29 01:00:00.856: INFO: Got stdout from 13.124.122.160:22: stdout
May 29 01:00:00.856: INFO: Got stderr from 13.124.122.160:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing ec2-user@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [k8s.io] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:00:05.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-647" for this suite.


• [SLOW TEST:20.266 seconds]
[k8s.io] [sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:06.190: INFO: Only supported for providers [gce gke] (not aws)
... skipping 21 lines ...
May 29 01:00:01.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 29 01:00:02.878: INFO: Waiting up to 5m0s for pod "pod-7428b5bb-7d24-4ca9-afd7-91aa763b3576" in namespace "emptydir-5770" to be "Succeeded or Failed"
May 29 01:00:03.037: INFO: Pod "pod-7428b5bb-7d24-4ca9-afd7-91aa763b3576": Phase="Pending", Reason="", readiness=false. Elapsed: 159.325358ms
May 29 01:00:05.195: INFO: Pod "pod-7428b5bb-7d24-4ca9-afd7-91aa763b3576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317693764s
May 29 01:00:07.362: INFO: Pod "pod-7428b5bb-7d24-4ca9-afd7-91aa763b3576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.484427519s
STEP: Saw pod success
May 29 01:00:07.362: INFO: Pod "pod-7428b5bb-7d24-4ca9-afd7-91aa763b3576" satisfied condition "Succeeded or Failed"
May 29 01:00:07.522: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-7428b5bb-7d24-4ca9-afd7-91aa763b3576 container test-container: <nil>
STEP: delete the pod
May 29 01:00:07.846: INFO: Waiting for pod pod-7428b5bb-7d24-4ca9-afd7-91aa763b3576 to disappear
May 29 01:00:08.004: INFO: Pod pod-7428b5bb-7d24-4ca9-afd7-91aa763b3576 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.402 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:08.343: INFO: Only supported for providers [gce gke] (not aws)
... skipping 28 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
May 29 00:59:54.465: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 29 00:59:54.625: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kfq8
STEP: Creating a pod to test subpath
May 29 00:59:54.787: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kfq8" in namespace "provisioning-642" to be "Succeeded or Failed"
May 29 00:59:54.949: INFO: Pod "pod-subpath-test-inlinevolume-kfq8": Phase="Pending", Reason="", readiness=false. Elapsed: 161.473214ms
May 29 00:59:57.108: INFO: Pod "pod-subpath-test-inlinevolume-kfq8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320490511s
May 29 00:59:59.267: INFO: Pod "pod-subpath-test-inlinevolume-kfq8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479531678s
May 29 01:00:01.426: INFO: Pod "pod-subpath-test-inlinevolume-kfq8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.638592626s
May 29 01:00:03.585: INFO: Pod "pod-subpath-test-inlinevolume-kfq8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.797923453s
May 29 01:00:05.744: INFO: Pod "pod-subpath-test-inlinevolume-kfq8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.956709455s
May 29 01:00:07.903: INFO: Pod "pod-subpath-test-inlinevolume-kfq8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.115726185s
May 29 01:00:10.062: INFO: Pod "pod-subpath-test-inlinevolume-kfq8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.274741638s
STEP: Saw pod success
May 29 01:00:10.062: INFO: Pod "pod-subpath-test-inlinevolume-kfq8" satisfied condition "Succeeded or Failed"
May 29 01:00:10.220: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-kfq8 container test-container-subpath-inlinevolume-kfq8: <nil>
STEP: delete the pod
May 29 01:00:10.552: INFO: Waiting for pod pod-subpath-test-inlinevolume-kfq8 to disappear
May 29 01:00:10.711: INFO: Pod pod-subpath-test-inlinevolume-kfq8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kfq8
May 29 01:00:10.711: INFO: Deleting pod "pod-subpath-test-inlinevolume-kfq8" in namespace "provisioning-642"
... skipping 49 lines ...
• [SLOW TEST:26.060 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:12.003: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
• [SLOW TEST:26.890 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:568
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:12.848: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
May 29 01:00:05.478: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
May 29 01:00:05.478: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-4b2b
STEP: Creating a pod to test subpath
May 29 01:00:05.642: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-4b2b" in namespace "provisioning-1140" to be "Succeeded or Failed"
May 29 01:00:05.804: INFO: Pod "pod-subpath-test-inlinevolume-4b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 161.446379ms
May 29 01:00:07.966: INFO: Pod "pod-subpath-test-inlinevolume-4b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323612472s
May 29 01:00:10.128: INFO: Pod "pod-subpath-test-inlinevolume-4b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485715209s
May 29 01:00:12.297: INFO: Pod "pod-subpath-test-inlinevolume-4b2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.654146145s
STEP: Saw pod success
May 29 01:00:12.297: INFO: Pod "pod-subpath-test-inlinevolume-4b2b" satisfied condition "Succeeded or Failed"
May 29 01:00:12.483: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-4b2b container test-container-subpath-inlinevolume-4b2b: <nil>
STEP: delete the pod
May 29 01:00:12.832: INFO: Waiting for pod pod-subpath-test-inlinevolume-4b2b to disappear
May 29 01:00:13.004: INFO: Pod pod-subpath-test-inlinevolume-4b2b no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-4b2b
May 29 01:00:13.004: INFO: Deleting pod "pod-subpath-test-inlinevolume-4b2b" in namespace "provisioning-1140"
... skipping 36 lines ...
• [SLOW TEST:6.753 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:15.129: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 69 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
... skipping 3 lines ...
May 29 01:00:00.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
May 29 01:00:01.875: INFO: Waiting up to 5m0s for pod "downward-api-48c88e42-20d3-481b-b542-5150d2322619" in namespace "downward-api-3267" to be "Succeeded or Failed"
May 29 01:00:02.035: INFO: Pod "downward-api-48c88e42-20d3-481b-b542-5150d2322619": Phase="Pending", Reason="", readiness=false. Elapsed: 159.758712ms
May 29 01:00:04.195: INFO: Pod "downward-api-48c88e42-20d3-481b-b542-5150d2322619": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320078942s
May 29 01:00:06.355: INFO: Pod "downward-api-48c88e42-20d3-481b-b542-5150d2322619": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480296145s
May 29 01:00:08.515: INFO: Pod "downward-api-48c88e42-20d3-481b-b542-5150d2322619": Phase="Pending", Reason="", readiness=false. Elapsed: 6.640582563s
May 29 01:00:10.675: INFO: Pod "downward-api-48c88e42-20d3-481b-b542-5150d2322619": Phase="Pending", Reason="", readiness=false. Elapsed: 8.800322988s
May 29 01:00:12.838: INFO: Pod "downward-api-48c88e42-20d3-481b-b542-5150d2322619": Phase="Pending", Reason="", readiness=false. Elapsed: 10.962858553s
May 29 01:00:15.013: INFO: Pod "downward-api-48c88e42-20d3-481b-b542-5150d2322619": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.13832057s
STEP: Saw pod success
May 29 01:00:15.013: INFO: Pod "downward-api-48c88e42-20d3-481b-b542-5150d2322619" satisfied condition "Succeeded or Failed"
May 29 01:00:15.173: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod downward-api-48c88e42-20d3-481b-b542-5150d2322619 container dapi-container: <nil>
STEP: delete the pod
May 29 01:00:15.500: INFO: Waiting for pod downward-api-48c88e42-20d3-481b-b542-5150d2322619 to disappear
May 29 01:00:15.660: INFO: Pod downward-api-48c88e42-20d3-481b-b542-5150d2322619 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:15.071 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":50,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 60 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
May 29 00:59:48.899: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 29 00:59:49.235: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-q26g
STEP: Creating a pod to test subpath
May 29 00:59:49.398: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-q26g" in namespace "provisioning-2507" to be "Succeeded or Failed"
May 29 00:59:49.557: INFO: Pod "pod-subpath-test-inlinevolume-q26g": Phase="Pending", Reason="", readiness=false. Elapsed: 159.555358ms
May 29 00:59:51.727: INFO: Pod "pod-subpath-test-inlinevolume-q26g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328830415s
May 29 00:59:53.887: INFO: Pod "pod-subpath-test-inlinevolume-q26g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489475514s
May 29 00:59:56.047: INFO: Pod "pod-subpath-test-inlinevolume-q26g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649473202s
May 29 00:59:58.207: INFO: Pod "pod-subpath-test-inlinevolume-q26g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.80933113s
May 29 01:00:00.367: INFO: Pod "pod-subpath-test-inlinevolume-q26g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.969345495s
... skipping 2 lines ...
May 29 01:00:06.863: INFO: Pod "pod-subpath-test-inlinevolume-q26g": Phase="Pending", Reason="", readiness=false. Elapsed: 17.465431897s
May 29 01:00:09.023: INFO: Pod "pod-subpath-test-inlinevolume-q26g": Phase="Pending", Reason="", readiness=false. Elapsed: 19.625632001s
May 29 01:00:11.185: INFO: Pod "pod-subpath-test-inlinevolume-q26g": Phase="Pending", Reason="", readiness=false. Elapsed: 21.786823795s
May 29 01:00:13.345: INFO: Pod "pod-subpath-test-inlinevolume-q26g": Phase="Pending", Reason="", readiness=false. Elapsed: 23.947351472s
May 29 01:00:15.507: INFO: Pod "pod-subpath-test-inlinevolume-q26g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.109228249s
STEP: Saw pod success
May 29 01:00:15.507: INFO: Pod "pod-subpath-test-inlinevolume-q26g" satisfied condition "Succeeded or Failed"
May 29 01:00:15.667: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-q26g container test-container-subpath-inlinevolume-q26g: <nil>
STEP: delete the pod
May 29 01:00:15.998: INFO: Waiting for pod pod-subpath-test-inlinevolume-q26g to disappear
May 29 01:00:16.157: INFO: Pod pod-subpath-test-inlinevolume-q26g no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-q26g
May 29 01:00:16.157: INFO: Deleting pod "pod-subpath-test-inlinevolume-q26g" in namespace "provisioning-2507"
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-configmap-wszn
STEP: Creating a pod to test atomic-volume-subpath
May 29 00:59:49.574: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wszn" in namespace "subpath-7648" to be "Succeeded or Failed"
May 29 00:59:49.757: INFO: Pod "pod-subpath-test-configmap-wszn": Phase="Pending", Reason="", readiness=false. Elapsed: 182.668846ms
May 29 00:59:51.917: INFO: Pod "pod-subpath-test-configmap-wszn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.343221568s
May 29 00:59:54.077: INFO: Pod "pod-subpath-test-configmap-wszn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.503468127s
May 29 00:59:56.238: INFO: Pod "pod-subpath-test-configmap-wszn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.663987876s
May 29 00:59:58.399: INFO: Pod "pod-subpath-test-configmap-wszn": Phase="Running", Reason="", readiness=true. Elapsed: 8.824848612s
May 29 01:00:00.559: INFO: Pod "pod-subpath-test-configmap-wszn": Phase="Running", Reason="", readiness=true. Elapsed: 10.985562991s
... skipping 3 lines ...
May 29 01:00:09.208: INFO: Pod "pod-subpath-test-configmap-wszn": Phase="Running", Reason="", readiness=true. Elapsed: 19.634077746s
May 29 01:00:11.368: INFO: Pod "pod-subpath-test-configmap-wszn": Phase="Running", Reason="", readiness=true. Elapsed: 21.794531648s
May 29 01:00:13.532: INFO: Pod "pod-subpath-test-configmap-wszn": Phase="Running", Reason="", readiness=true. Elapsed: 23.95835165s
May 29 01:00:15.693: INFO: Pod "pod-subpath-test-configmap-wszn": Phase="Running", Reason="", readiness=true. Elapsed: 26.118665165s
May 29 01:00:17.854: INFO: Pod "pod-subpath-test-configmap-wszn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.280565561s
STEP: Saw pod success
May 29 01:00:17.855: INFO: Pod "pod-subpath-test-configmap-wszn" satisfied condition "Succeeded or Failed"
May 29 01:00:18.015: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-subpath-test-configmap-wszn container test-container-subpath-configmap-wszn: <nil>
STEP: delete the pod
May 29 01:00:18.375: INFO: Waiting for pod pod-subpath-test-configmap-wszn to disappear
May 29 01:00:18.539: INFO: Pod pod-subpath-test-configmap-wszn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-wszn
May 29 01:00:18.539: INFO: Deleting pod "pod-subpath-test-configmap-wszn" in namespace "subpath-7648"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 40 lines ...
• [SLOW TEST:31.580 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:20.285: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 127 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:81
May 29 01:00:22.043: INFO: Driver "nfs" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 41 lines ...
May 29 01:00:05.403: INFO: PersistentVolumeClaim pvc-4zksv found but phase is Pending instead of Bound.
May 29 01:00:07.568: INFO: PersistentVolumeClaim pvc-4zksv found and phase=Bound (8.817394163s)
May 29 01:00:07.568: INFO: Waiting up to 3m0s for PersistentVolume local-8lc26 to have phase Bound
May 29 01:00:07.730: INFO: PersistentVolume local-8lc26 found and phase=Bound (162.530652ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-knb5
STEP: Creating a pod to test subpath
May 29 01:00:08.220: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-knb5" in namespace "provisioning-2931" to be "Succeeded or Failed"
May 29 01:00:08.383: INFO: Pod "pod-subpath-test-preprovisionedpv-knb5": Phase="Pending", Reason="", readiness=false. Elapsed: 163.024607ms
May 29 01:00:10.546: INFO: Pod "pod-subpath-test-preprovisionedpv-knb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326797743s
May 29 01:00:12.710: INFO: Pod "pod-subpath-test-preprovisionedpv-knb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489852313s
May 29 01:00:14.880: INFO: Pod "pod-subpath-test-preprovisionedpv-knb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.660063344s
May 29 01:00:17.043: INFO: Pod "pod-subpath-test-preprovisionedpv-knb5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.823294268s
May 29 01:00:19.208: INFO: Pod "pod-subpath-test-preprovisionedpv-knb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.988769207s
STEP: Saw pod success
May 29 01:00:19.209: INFO: Pod "pod-subpath-test-preprovisionedpv-knb5" satisfied condition "Succeeded or Failed"
May 29 01:00:19.371: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-knb5 container test-container-subpath-preprovisionedpv-knb5: <nil>
STEP: delete the pod
May 29 01:00:19.721: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-knb5 to disappear
May 29 01:00:19.890: INFO: Pod pod-subpath-test-preprovisionedpv-knb5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-knb5
May 29 01:00:19.890: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-knb5" in namespace "provisioning-2931"
... skipping 62 lines ...
• [SLOW TEST:21.925 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:23.720: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 56 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:206

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:00:16.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in volume subpath
May 29 01:00:17.280: INFO: Waiting up to 5m0s for pod "var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2" in namespace "var-expansion-4373" to be "Succeeded or Failed"
May 29 01:00:17.437: INFO: Pod "var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2": Phase="Pending", Reason="", readiness=false. Elapsed: 157.178836ms
May 29 01:00:19.594: INFO: Pod "var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314232586s
May 29 01:00:21.751: INFO: Pod "var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471198578s
May 29 01:00:23.908: INFO: Pod "var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.628124079s
May 29 01:00:26.065: INFO: Pod "var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.784930797s
May 29 01:00:28.222: INFO: Pod "var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.942434643s
May 29 01:00:30.379: INFO: Pod "var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.099463847s
STEP: Saw pod success
May 29 01:00:30.379: INFO: Pod "var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2" satisfied condition "Succeeded or Failed"
May 29 01:00:30.536: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2 container dapi-container: <nil>
STEP: delete the pod
May 29 01:00:30.868: INFO: Waiting for pod var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2 to disappear
May 29 01:00:31.025: INFO: Pod var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:15.007 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
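The substitution test above relies on subPathExpr, which expands $(ENV_VAR) references from the container's environment at mount time (plain subPath does no expansion). A sketch; names are illustrative:

package sketch

import v1 "k8s.io/api/core/v1"

func subPathExprContainer() v1.Container {
	return v1.Container{
		Name:  "dapi-container",
		Image: "k8s.gcr.io/e2e-test-images/busybox:1.29",
		Env: []v1.EnvVar{{
			Name:      "POD_NAME",
			ValueFrom: &v1.EnvVarSource{FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"}},
		}},
		VolumeMounts: []v1.VolumeMount{{
			Name:        "workdir1",
			MountPath:   "/logs",
			SubPathExpr: "$(POD_NAME)", // expanded per pod when the volume is mounted
		}},
	}
}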
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":21,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:00:22.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:171
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
May 29 01:00:23.949: INFO: Waiting up to 5m0s for pod "security-context-6fda962d-85e4-42ca-be3b-edefb46e0037" in namespace "security-context-5289" to be "Succeeded or Failed"
May 29 01:00:24.112: INFO: Pod "security-context-6fda962d-85e4-42ca-be3b-edefb46e0037": Phase="Pending", Reason="", readiness=false. Elapsed: 163.090531ms
May 29 01:00:26.275: INFO: Pod "security-context-6fda962d-85e4-42ca-be3b-edefb46e0037": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326008876s
May 29 01:00:28.438: INFO: Pod "security-context-6fda962d-85e4-42ca-be3b-edefb46e0037": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488955792s
May 29 01:00:30.603: INFO: Pod "security-context-6fda962d-85e4-42ca-be3b-edefb46e0037": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.654286219s
STEP: Saw pod success
May 29 01:00:30.603: INFO: Pod "security-context-6fda962d-85e4-42ca-be3b-edefb46e0037" satisfied condition "Succeeded or Failed"
May 29 01:00:30.766: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod security-context-6fda962d-85e4-42ca-be3b-edefb46e0037 container test-container: <nil>
STEP: delete the pod
May 29 01:00:31.099: INFO: Waiting for pod security-context-6fda962d-85e4-42ca-be3b-edefb46e0037 to disappear
May 29 01:00:31.262: INFO: Pod security-context-6fda962d-85e4-42ca-be3b-edefb46e0037 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.621 seconds]
[k8s.io] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:171
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":2,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:38.248: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 98 lines ...
May 29 01:00:20.417: INFO: PersistentVolumeClaim pvc-fxf75 found but phase is Pending instead of Bound.
May 29 01:00:22.576: INFO: PersistentVolumeClaim pvc-fxf75 found and phase=Bound (15.316126273s)
May 29 01:00:22.576: INFO: Waiting up to 3m0s for PersistentVolume local-s22xx to have phase Bound
May 29 01:00:22.735: INFO: PersistentVolume local-s22xx found and phase=Bound (158.933033ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jvrz
STEP: Creating a pod to test subpath
May 29 01:00:23.214: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jvrz" in namespace "provisioning-8745" to be "Succeeded or Failed"
May 29 01:00:23.373: INFO: Pod "pod-subpath-test-preprovisionedpv-jvrz": Phase="Pending", Reason="", readiness=false. Elapsed: 158.858359ms
May 29 01:00:25.532: INFO: Pod "pod-subpath-test-preprovisionedpv-jvrz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318025824s
May 29 01:00:27.691: INFO: Pod "pod-subpath-test-preprovisionedpv-jvrz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47691877s
May 29 01:00:29.851: INFO: Pod "pod-subpath-test-preprovisionedpv-jvrz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.636976201s
May 29 01:00:32.010: INFO: Pod "pod-subpath-test-preprovisionedpv-jvrz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.795975257s
May 29 01:00:34.170: INFO: Pod "pod-subpath-test-preprovisionedpv-jvrz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.955216651s
May 29 01:00:36.329: INFO: Pod "pod-subpath-test-preprovisionedpv-jvrz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.114527792s
STEP: Saw pod success
May 29 01:00:36.329: INFO: Pod "pod-subpath-test-preprovisionedpv-jvrz" satisfied condition "Succeeded or Failed"
May 29 01:00:36.487: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-jvrz container test-container-volume-preprovisionedpv-jvrz: <nil>
STEP: delete the pod
May 29 01:00:36.821: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jvrz to disappear
May 29 01:00:36.980: INFO: Pod pod-subpath-test-preprovisionedpv-jvrz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jvrz
May 29 01:00:36.980: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jvrz" in namespace "provisioning-8745"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:39.215: INFO: Only supported for providers [vsphere] (not aws)
... skipping 37 lines ...
May 29 01:00:11.236: INFO: Waiting up to 5m0s for PersistentVolumeClaims [nfsnrw2n] to have phase Bound
May 29 01:00:11.393: INFO: PersistentVolumeClaim nfsnrw2n found but phase is Pending instead of Bound.
May 29 01:00:13.552: INFO: PersistentVolumeClaim nfsnrw2n found but phase is Pending instead of Bound.
May 29 01:00:15.709: INFO: PersistentVolumeClaim nfsnrw2n found and phase=Bound (4.473350872s)
STEP: Creating pod pod-subpath-test-dynamicpv-tx7d
STEP: Creating a pod to test subpath
May 29 01:00:16.187: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-tx7d" in namespace "provisioning-6606" to be "Succeeded or Failed"
May 29 01:00:16.345: INFO: Pod "pod-subpath-test-dynamicpv-tx7d": Phase="Pending", Reason="", readiness=false. Elapsed: 157.389425ms
May 29 01:00:18.503: INFO: Pod "pod-subpath-test-dynamicpv-tx7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31523297s
May 29 01:00:20.675: INFO: Pod "pod-subpath-test-dynamicpv-tx7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.487597805s
May 29 01:00:22.837: INFO: Pod "pod-subpath-test-dynamicpv-tx7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649978488s
May 29 01:00:24.995: INFO: Pod "pod-subpath-test-dynamicpv-tx7d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.807604655s
May 29 01:00:27.157: INFO: Pod "pod-subpath-test-dynamicpv-tx7d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.969205558s
May 29 01:00:29.314: INFO: Pod "pod-subpath-test-dynamicpv-tx7d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.126964734s
May 29 01:00:31.472: INFO: Pod "pod-subpath-test-dynamicpv-tx7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.28459885s
STEP: Saw pod success
May 29 01:00:31.472: INFO: Pod "pod-subpath-test-dynamicpv-tx7d" satisfied condition "Succeeded or Failed"
May 29 01:00:31.629: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-tx7d container test-container-subpath-dynamicpv-tx7d: <nil>
STEP: delete the pod
May 29 01:00:31.958: INFO: Waiting for pod pod-subpath-test-dynamicpv-tx7d to disappear
May 29 01:00:32.115: INFO: Pod pod-subpath-test-dynamicpv-tx7d no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-tx7d
May 29 01:00:32.116: INFO: Deleting pod "pod-subpath-test-dynamicpv-tx7d" in namespace "provisioning-6606"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:39.891: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 53 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
S
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":-1,"completed":3,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:00:31.351: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
May 29 01:00:32.137: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
May 29 01:00:32.137: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fjhm
STEP: Creating a pod to test subpath
May 29 01:00:32.295: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-fjhm" in namespace "provisioning-785" to be "Succeeded or Failed"
May 29 01:00:32.452: INFO: Pod "pod-subpath-test-inlinevolume-fjhm": Phase="Pending", Reason="", readiness=false. Elapsed: 156.635272ms
May 29 01:00:34.614: INFO: Pod "pod-subpath-test-inlinevolume-fjhm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318064025s
May 29 01:00:36.770: INFO: Pod "pod-subpath-test-inlinevolume-fjhm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474889868s
May 29 01:00:38.928: INFO: Pod "pod-subpath-test-inlinevolume-fjhm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.631953178s
STEP: Saw pod success
May 29 01:00:38.928: INFO: Pod "pod-subpath-test-inlinevolume-fjhm" satisfied condition "Succeeded or Failed"
May 29 01:00:39.084: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-fjhm container test-container-subpath-inlinevolume-fjhm: <nil>
STEP: delete the pod
May 29 01:00:39.414: INFO: Waiting for pod pod-subpath-test-inlinevolume-fjhm to disappear
May 29 01:00:39.570: INFO: Pod pod-subpath-test-inlinevolume-fjhm no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-fjhm
May 29 01:00:39.571: INFO: Deleting pod "pod-subpath-test-inlinevolume-fjhm" in namespace "provisioning-785"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:40.232: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 44 lines ...
May 29 01:00:19.797: INFO: PersistentVolumeClaim pvc-p4mz4 found but phase is Pending instead of Bound.
May 29 01:00:21.958: INFO: PersistentVolumeClaim pvc-p4mz4 found and phase=Bound (15.362412953s)
May 29 01:00:21.958: INFO: Waiting up to 3m0s for PersistentVolume local-45lsc to have phase Bound
May 29 01:00:22.119: INFO: PersistentVolume local-45lsc found and phase=Bound (161.177919ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kpht
STEP: Creating a pod to test subpath
May 29 01:00:22.609: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kpht" in namespace "provisioning-1831" to be "Succeeded or Failed"
May 29 01:00:22.770: INFO: Pod "pod-subpath-test-preprovisionedpv-kpht": Phase="Pending", Reason="", readiness=false. Elapsed: 161.088061ms
May 29 01:00:24.932: INFO: Pod "pod-subpath-test-preprovisionedpv-kpht": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322534178s
May 29 01:00:27.106: INFO: Pod "pod-subpath-test-preprovisionedpv-kpht": Phase="Pending", Reason="", readiness=false. Elapsed: 4.49724686s
May 29 01:00:29.268: INFO: Pod "pod-subpath-test-preprovisionedpv-kpht": Phase="Pending", Reason="", readiness=false. Elapsed: 6.658820082s
May 29 01:00:31.430: INFO: Pod "pod-subpath-test-preprovisionedpv-kpht": Phase="Pending", Reason="", readiness=false. Elapsed: 8.820394773s
May 29 01:00:33.591: INFO: Pod "pod-subpath-test-preprovisionedpv-kpht": Phase="Pending", Reason="", readiness=false. Elapsed: 10.981910538s
May 29 01:00:35.756: INFO: Pod "pod-subpath-test-preprovisionedpv-kpht": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.146896272s
STEP: Saw pod success
May 29 01:00:35.756: INFO: Pod "pod-subpath-test-preprovisionedpv-kpht" satisfied condition "Succeeded or Failed"
May 29 01:00:35.919: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-kpht container test-container-volume-preprovisionedpv-kpht: <nil>
STEP: delete the pod
May 29 01:00:36.257: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kpht to disappear
May 29 01:00:36.425: INFO: Pod pod-subpath-test-preprovisionedpv-kpht no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kpht
May 29 01:00:36.425: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kpht" in namespace "provisioning-1831"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:00:11.373: INFO: >>> kubeConfig: /root/.kube/config
... skipping 13 lines ...
May 29 01:00:20.632: INFO: PersistentVolumeClaim pvc-bmszg found but phase is Pending instead of Bound.
May 29 01:00:22.790: INFO: PersistentVolumeClaim pvc-bmszg found and phase=Bound (4.492001992s)
May 29 01:00:22.790: INFO: Waiting up to 3m0s for PersistentVolume local-2g6dq to have phase Bound
May 29 01:00:22.951: INFO: PersistentVolume local-2g6dq found and phase=Bound (160.067578ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-gqpj
STEP: Creating a pod to test exec-volume-test
May 29 01:00:23.429: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-gqpj" in namespace "volume-8629" to be "Succeeded or Failed"
May 29 01:00:23.588: INFO: Pod "exec-volume-test-preprovisionedpv-gqpj": Phase="Pending", Reason="", readiness=false. Elapsed: 158.226363ms
May 29 01:00:25.746: INFO: Pod "exec-volume-test-preprovisionedpv-gqpj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316641908s
May 29 01:00:27.904: INFO: Pod "exec-volume-test-preprovisionedpv-gqpj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475122837s
May 29 01:00:30.063: INFO: Pod "exec-volume-test-preprovisionedpv-gqpj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.633995698s
May 29 01:00:32.222: INFO: Pod "exec-volume-test-preprovisionedpv-gqpj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.792489068s
May 29 01:00:34.381: INFO: Pod "exec-volume-test-preprovisionedpv-gqpj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.951295446s
May 29 01:00:36.539: INFO: Pod "exec-volume-test-preprovisionedpv-gqpj": Phase="Pending", Reason="", readiness=false. Elapsed: 13.109882603s
May 29 01:00:38.698: INFO: Pod "exec-volume-test-preprovisionedpv-gqpj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.268440296s
STEP: Saw pod success
May 29 01:00:38.698: INFO: Pod "exec-volume-test-preprovisionedpv-gqpj" satisfied condition "Succeeded or Failed"
May 29 01:00:38.856: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-gqpj container exec-container-preprovisionedpv-gqpj: <nil>
STEP: delete the pod
May 29 01:00:39.182: INFO: Waiting for pod exec-volume-test-preprovisionedpv-gqpj to disappear
May 29 01:00:39.347: INFO: Pod exec-volume-test-preprovisionedpv-gqpj no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-gqpj
May 29 01:00:39.347: INFO: Deleting pod "exec-volume-test-preprovisionedpv-gqpj" in namespace "volume-8629"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:41.724: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 82 lines ...
May 29 01:00:02.462: INFO: PersistentVolumeClaim pvc-qb7vp found and phase=Bound (159.4297ms)
May 29 01:00:02.462: INFO: Waiting up to 3m0s for PersistentVolume nfs-t8qr9 to have phase Bound
May 29 01:00:02.636: INFO: PersistentVolume nfs-t8qr9 found and phase=Bound (174.241932ms)
[It] should test that a PV becomes Available and is clean after the PVC is deleted.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
STEP: Writing to the volume.
May 29 01:00:03.118: INFO: Waiting up to 5m0s for pod "pvc-tester-5d2j9" in namespace "pv-3796" to be "Succeeded or Failed"
May 29 01:00:03.293: INFO: Pod "pvc-tester-5d2j9": Phase="Pending", Reason="", readiness=false. Elapsed: 174.998714ms
May 29 01:00:05.452: INFO: Pod "pvc-tester-5d2j9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334527928s
May 29 01:00:07.612: INFO: Pod "pvc-tester-5d2j9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.493716284s
STEP: Saw pod success
May 29 01:00:07.612: INFO: Pod "pvc-tester-5d2j9" satisfied condition "Succeeded or Failed"
STEP: Deleting the claim
May 29 01:00:07.612: INFO: Deleting pod "pvc-tester-5d2j9" in namespace "pv-3796"
May 29 01:00:07.775: INFO: Wait up to 5m0s for pod "pvc-tester-5d2j9" to be fully deleted
May 29 01:00:07.934: INFO: Deleting PVC pvc-qb7vp to trigger reclamation of PV 
May 29 01:00:07.934: INFO: Deleting PersistentVolumeClaim "pvc-qb7vp"
May 29 01:00:08.093: INFO: Waiting for reclaim process to complete.
... skipping 4 lines ...
May 29 01:00:14.736: INFO: PersistentVolume nfs-t8qr9 found and phase=Available (6.643142283s)
May 29 01:00:14.895: INFO: PV nfs-t8qr9 now in "Available" phase
STEP: Re-mounting the volume.
May 29 01:00:15.054: INFO: Waiting up to 1m0s for PersistentVolumeClaims [pvc-89d8t] to have phase Bound
May 29 01:00:15.214: INFO: PersistentVolumeClaim pvc-89d8t found and phase=Bound (159.121982ms)
STEP: Verifying the mount has been cleaned.
May 29 01:00:15.373: INFO: Waiting up to 5m0s for pod "pvc-tester-ng9vj" in namespace "pv-3796" to be "Succeeded or Failed"
May 29 01:00:15.532: INFO: Pod "pvc-tester-ng9vj": Phase="Pending", Reason="", readiness=false. Elapsed: 158.401968ms
May 29 01:00:17.691: INFO: Pod "pvc-tester-ng9vj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317459666s
May 29 01:00:19.850: INFO: Pod "pvc-tester-ng9vj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.476512861s
STEP: Saw pod success
May 29 01:00:19.850: INFO: Pod "pvc-tester-ng9vj" satisfied condition "Succeeded or Failed"
May 29 01:00:19.850: INFO: Deleting pod "pvc-tester-ng9vj" in namespace "pv-3796"
May 29 01:00:20.017: INFO: Wait up to 5m0s for pod "pvc-tester-ng9vj" to be fully deleted
May 29 01:00:20.176: INFO: Pod exited without failure; the volume has been recycled.
May 29 01:00:20.176: INFO: Removing second PVC, waiting for the recycler to finish before cleanup.
May 29 01:00:20.176: INFO: Deleting PVC pvc-89d8t to trigger reclamation of PV 
May 29 01:00:20.176: INFO: Deleting PersistentVolumeClaim "pvc-89d8t"
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    when invoking the Recycle reclaim policy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:265
      should test that a PV becomes Available and is clean after the PVC is deleted.
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","total":-1,"completed":1,"skipped":16,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:44.786: INFO: Driver nfs doesn't support ext3 -- skipping
... skipping 95 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 91 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:00:44.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7341" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":5,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:45.254: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:00:46.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2050" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":6,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:47.130: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 159 lines ...
• [SLOW TEST:27.099 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:47.455: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 56 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-2f0418ce-b25e-4dda-a6f3-cbce670e86ce
STEP: Creating a pod to test consume secrets
May 29 01:00:41.762: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-aa184f35-f403-4b41-a82b-a3faf0f1bca1" in namespace "projected-4940" to be "Succeeded or Failed"
May 29 01:00:41.922: INFO: Pod "pod-projected-secrets-aa184f35-f403-4b41-a82b-a3faf0f1bca1": Phase="Pending", Reason="", readiness=false. Elapsed: 159.584056ms
May 29 01:00:44.080: INFO: Pod "pod-projected-secrets-aa184f35-f403-4b41-a82b-a3faf0f1bca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317324106s
May 29 01:00:46.237: INFO: Pod "pod-projected-secrets-aa184f35-f403-4b41-a82b-a3faf0f1bca1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474881105s
May 29 01:00:48.395: INFO: Pod "pod-projected-secrets-aa184f35-f403-4b41-a82b-a3faf0f1bca1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.632432713s
May 29 01:00:50.552: INFO: Pod "pod-projected-secrets-aa184f35-f403-4b41-a82b-a3faf0f1bca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.789918919s
STEP: Saw pod success
May 29 01:00:50.552: INFO: Pod "pod-projected-secrets-aa184f35-f403-4b41-a82b-a3faf0f1bca1" satisfied condition "Succeeded or Failed"
May 29 01:00:50.710: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod pod-projected-secrets-aa184f35-f403-4b41-a82b-a3faf0f1bca1 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 29 01:00:51.039: INFO: Waiting for pod pod-projected-secrets-aa184f35-f403-4b41-a82b-a3faf0f1bca1 to disappear
May 29 01:00:51.198: INFO: Pod pod-projected-secrets-aa184f35-f403-4b41-a82b-a3faf0f1bca1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 5 lines ...
• [SLOW TEST:11.746 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":3,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 45 lines ...
May 29 01:00:34.841: INFO: PersistentVolumeClaim pvc-qpr52 found but phase is Pending instead of Bound.
May 29 01:00:36.999: INFO: PersistentVolumeClaim pvc-qpr52 found and phase=Bound (2.316042946s)
May 29 01:00:36.999: INFO: Waiting up to 3m0s for PersistentVolume local-j5fcg to have phase Bound
May 29 01:00:37.157: INFO: PersistentVolume local-j5fcg found and phase=Bound (157.962374ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-s976
STEP: Creating a pod to test subpath
May 29 01:00:37.647: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-s976" in namespace "provisioning-5766" to be "Succeeded or Failed"
May 29 01:00:37.806: INFO: Pod "pod-subpath-test-preprovisionedpv-s976": Phase="Pending", Reason="", readiness=false. Elapsed: 158.683944ms
May 29 01:00:39.965: INFO: Pod "pod-subpath-test-preprovisionedpv-s976": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317311553s
May 29 01:00:42.125: INFO: Pod "pod-subpath-test-preprovisionedpv-s976": Phase="Pending", Reason="", readiness=false. Elapsed: 4.477561265s
May 29 01:00:44.284: INFO: Pod "pod-subpath-test-preprovisionedpv-s976": Phase="Pending", Reason="", readiness=false. Elapsed: 6.636312438s
May 29 01:00:46.442: INFO: Pod "pod-subpath-test-preprovisionedpv-s976": Phase="Pending", Reason="", readiness=false. Elapsed: 8.794783391s
May 29 01:00:48.615: INFO: Pod "pod-subpath-test-preprovisionedpv-s976": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.967573645s
STEP: Saw pod success
May 29 01:00:48.615: INFO: Pod "pod-subpath-test-preprovisionedpv-s976" satisfied condition "Succeeded or Failed"
May 29 01:00:48.774: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-s976 container test-container-subpath-preprovisionedpv-s976: <nil>
STEP: delete the pod
May 29 01:00:49.100: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-s976 to disappear
May 29 01:00:49.259: INFO: Pod pod-subpath-test-preprovisionedpv-s976 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-s976
May 29 01:00:49.259: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-s976" in namespace "provisioning-5766"
... skipping 47 lines ...
STEP: creating a claim
May 29 01:00:35.974: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 29 01:00:36.136: INFO: Waiting up to 5m0s for PersistentVolumeClaims [nfs4zjfv] to have phase Bound
May 29 01:00:36.297: INFO: PersistentVolumeClaim nfs4zjfv found and phase=Bound (160.845807ms)
STEP: Creating pod pod-subpath-test-dynamicpv-9fxv
STEP: Creating a pod to test subpath
May 29 01:00:36.782: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9fxv" in namespace "provisioning-9878" to be "Succeeded or Failed"
May 29 01:00:36.943: INFO: Pod "pod-subpath-test-dynamicpv-9fxv": Phase="Pending", Reason="", readiness=false. Elapsed: 160.925439ms
May 29 01:00:39.104: INFO: Pod "pod-subpath-test-dynamicpv-9fxv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32248236s
May 29 01:00:41.268: INFO: Pod "pod-subpath-test-dynamicpv-9fxv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.486689026s
May 29 01:00:43.432: INFO: Pod "pod-subpath-test-dynamicpv-9fxv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649806618s
May 29 01:00:45.600: INFO: Pod "pod-subpath-test-dynamicpv-9fxv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.818523695s
STEP: Saw pod success
May 29 01:00:45.600: INFO: Pod "pod-subpath-test-dynamicpv-9fxv" satisfied condition "Succeeded or Failed"
May 29 01:00:45.761: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-9fxv container test-container-volume-dynamicpv-9fxv: <nil>
STEP: delete the pod
May 29 01:00:46.093: INFO: Waiting for pod pod-subpath-test-dynamicpv-9fxv to disappear
May 29 01:00:46.254: INFO: Pod pod-subpath-test-dynamicpv-9fxv no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9fxv
May 29 01:00:46.254: INFO: Deleting pod "pod-subpath-test-dynamicpv-9fxv" in namespace "provisioning-9878"
... skipping 47 lines ...
May 29 01:00:37.620: INFO: PersistentVolumeClaim pvc-bc22p found and phase=Bound (159.704983ms)
May 29 01:00:37.620: INFO: Waiting up to 3m0s for PersistentVolume nfs-2rfph to have phase Bound
May 29 01:00:37.779: INFO: PersistentVolume nfs-2rfph found and phase=Bound (158.932663ms)
STEP: Checking pod has write access to PersistentVolume
May 29 01:00:38.098: INFO: Creating nfs test pod
May 29 01:00:38.258: INFO: Pod should terminate with exitcode 0 (success)
May 29 01:00:38.258: INFO: Waiting up to 5m0s for pod "pvc-tester-mp7tr" in namespace "pv-3740" to be "Succeeded or Failed"
May 29 01:00:38.416: INFO: Pod "pvc-tester-mp7tr": Phase="Pending", Reason="", readiness=false. Elapsed: 158.663073ms
May 29 01:00:40.576: INFO: Pod "pvc-tester-mp7tr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317865593s
May 29 01:00:42.735: INFO: Pod "pvc-tester-mp7tr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47729496s
May 29 01:00:44.898: INFO: Pod "pvc-tester-mp7tr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.639682087s
May 29 01:00:47.057: INFO: Pod "pvc-tester-mp7tr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.799052001s
May 29 01:00:49.216: INFO: Pod "pvc-tester-mp7tr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.958170355s
STEP: Saw pod success
May 29 01:00:49.216: INFO: Pod "pvc-tester-mp7tr" satisfied condition "Succeeded or Failed"
May 29 01:00:49.216: INFO: Pod pvc-tester-mp7tr succeeded 
May 29 01:00:49.216: INFO: Deleting pod "pvc-tester-mp7tr" in namespace "pv-3740"
May 29 01:00:49.386: INFO: Wait up to 5m0s for pod "pvc-tester-mp7tr" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
May 29 01:00:49.546: INFO: Deleting PVC pvc-bc22p to trigger reclamation of PV 
May 29 01:00:49.546: INFO: Deleting PersistentVolumeClaim "pvc-bc22p"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:59.165: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 93 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:59.200: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 31 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1570
------------------------------
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:00:59.937: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 106 lines ...
• [SLOW TEST:75.161 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:01.118: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 162 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  [k8s.io] [sig-node] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":4,"skipped":53,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:01.797: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 25 lines ...
May 29 01:00:47.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
May 29 01:00:48.002: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 29 01:00:48.319: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-163" in namespace "provisioning-163" to be "Succeeded or Failed"
May 29 01:00:48.476: INFO: Pod "hostpath-symlink-prep-provisioning-163": Phase="Pending", Reason="", readiness=false. Elapsed: 156.884309ms
May 29 01:00:50.634: INFO: Pod "hostpath-symlink-prep-provisioning-163": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314037516s
May 29 01:00:52.791: INFO: Pod "hostpath-symlink-prep-provisioning-163": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471165118s
May 29 01:00:54.948: INFO: Pod "hostpath-symlink-prep-provisioning-163": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.628819993s
STEP: Saw pod success
May 29 01:00:54.948: INFO: Pod "hostpath-symlink-prep-provisioning-163" satisfied condition "Succeeded or Failed"
May 29 01:00:54.948: INFO: Deleting pod "hostpath-symlink-prep-provisioning-163" in namespace "provisioning-163"
May 29 01:00:55.111: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-163" to be fully deleted
May 29 01:00:55.283: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fwdq
STEP: Creating a pod to test subpath
May 29 01:00:55.441: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-fwdq" in namespace "provisioning-163" to be "Succeeded or Failed"
May 29 01:00:55.598: INFO: Pod "pod-subpath-test-inlinevolume-fwdq": Phase="Pending", Reason="", readiness=false. Elapsed: 156.629237ms
May 29 01:00:57.755: INFO: Pod "pod-subpath-test-inlinevolume-fwdq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.313579853s
STEP: Saw pod success
May 29 01:00:57.755: INFO: Pod "pod-subpath-test-inlinevolume-fwdq" satisfied condition "Succeeded or Failed"
May 29 01:00:57.912: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-fwdq container test-container-subpath-inlinevolume-fwdq: <nil>
STEP: delete the pod
May 29 01:00:58.238: INFO: Waiting for pod pod-subpath-test-inlinevolume-fwdq to disappear
May 29 01:00:58.394: INFO: Pod pod-subpath-test-inlinevolume-fwdq no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-fwdq
May 29 01:00:58.394: INFO: Deleting pod "pod-subpath-test-inlinevolume-fwdq" in namespace "provisioning-163"
STEP: Deleting pod
May 29 01:00:58.551: INFO: Deleting pod "pod-subpath-test-inlinevolume-fwdq" in namespace "provisioning-163"
May 29 01:00:58.865: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-163" in namespace "provisioning-163" to be "Succeeded or Failed"
May 29 01:00:59.022: INFO: Pod "hostpath-symlink-prep-provisioning-163": Phase="Pending", Reason="", readiness=false. Elapsed: 156.507415ms
May 29 01:01:01.179: INFO: Pod "hostpath-symlink-prep-provisioning-163": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.31322089s
STEP: Saw pod success
May 29 01:01:01.179: INFO: Pod "hostpath-symlink-prep-provisioning-163" satisfied condition "Succeeded or Failed"
May 29 01:01:01.179: INFO: Deleting pod "hostpath-symlink-prep-provisioning-163" in namespace "provisioning-163"
May 29 01:01:01.341: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-163" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:01:01.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-163" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":25,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":5,"skipped":51,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:00:53.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-0cf8bcb7-d4d8-46c4-ab78-12aedabad554
STEP: Creating a pod to test consume configMaps
May 29 01:00:54.751: INFO: Waiting up to 5m0s for pod "pod-configmaps-3978477b-41b6-410d-949d-dba22e216125" in namespace "configmap-2716" to be "Succeeded or Failed"
May 29 01:00:54.910: INFO: Pod "pod-configmaps-3978477b-41b6-410d-949d-dba22e216125": Phase="Pending", Reason="", readiness=false. Elapsed: 158.145587ms
May 29 01:00:57.069: INFO: Pod "pod-configmaps-3978477b-41b6-410d-949d-dba22e216125": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31727606s
May 29 01:00:59.227: INFO: Pod "pod-configmaps-3978477b-41b6-410d-949d-dba22e216125": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475885583s
May 29 01:01:01.386: INFO: Pod "pod-configmaps-3978477b-41b6-410d-949d-dba22e216125": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.634305972s
STEP: Saw pod success
May 29 01:01:01.386: INFO: Pod "pod-configmaps-3978477b-41b6-410d-949d-dba22e216125" satisfied condition "Succeeded or Failed"
May 29 01:01:01.544: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod pod-configmaps-3978477b-41b6-410d-949d-dba22e216125 container agnhost-container: <nil>
STEP: delete the pod
May 29 01:01:01.899: INFO: Waiting for pod pod-configmaps-3978477b-41b6-410d-949d-dba22e216125 to disappear
May 29 01:01:02.064: INFO: Pod pod-configmaps-3978477b-41b6-410d-949d-dba22e216125 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.819 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 62 lines ...
May 29 01:00:14.588: INFO: PersistentVolumeClaim csi-hostpathghgrx found but phase is Pending instead of Bound.
May 29 01:00:16.750: INFO: PersistentVolumeClaim csi-hostpathghgrx found but phase is Pending instead of Bound.
May 29 01:00:18.912: INFO: PersistentVolumeClaim csi-hostpathghgrx found but phase is Pending instead of Bound.
May 29 01:00:21.076: INFO: PersistentVolumeClaim csi-hostpathghgrx found and phase=Bound (26.156715168s)
STEP: Creating pod pod-subpath-test-dynamicpv-95t7
STEP: Creating a pod to test subpath
May 29 01:00:21.579: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-95t7" in namespace "provisioning-6951" to be "Succeeded or Failed"
May 29 01:00:21.741: INFO: Pod "pod-subpath-test-dynamicpv-95t7": Phase="Pending", Reason="", readiness=false. Elapsed: 161.874413ms
May 29 01:00:23.903: INFO: Pod "pod-subpath-test-dynamicpv-95t7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323910818s
May 29 01:00:26.065: INFO: Pod "pod-subpath-test-dynamicpv-95t7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485984001s
May 29 01:00:28.230: INFO: Pod "pod-subpath-test-dynamicpv-95t7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.651563832s
May 29 01:00:30.392: INFO: Pod "pod-subpath-test-dynamicpv-95t7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.813267433s
May 29 01:00:32.554: INFO: Pod "pod-subpath-test-dynamicpv-95t7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.975101022s
May 29 01:00:34.716: INFO: Pod "pod-subpath-test-dynamicpv-95t7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.137047572s
May 29 01:00:36.878: INFO: Pod "pod-subpath-test-dynamicpv-95t7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.299055848s
STEP: Saw pod success
May 29 01:00:36.878: INFO: Pod "pod-subpath-test-dynamicpv-95t7" satisfied condition "Succeeded or Failed"
May 29 01:00:37.040: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-95t7 container test-container-volume-dynamicpv-95t7: <nil>
STEP: delete the pod
May 29 01:00:37.450: INFO: Waiting for pod pod-subpath-test-dynamicpv-95t7 to disappear
May 29 01:00:37.612: INFO: Pod pod-subpath-test-dynamicpv-95t7 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-95t7
May 29 01:00:37.612: INFO: Deleting pod "pod-subpath-test-dynamicpv-95t7" in namespace "provisioning-6951"
... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 37 lines ...
May 29 01:01:02.955: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [1.125 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:141

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:01:03.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-379" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:03.879: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 93 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:04.995: INFO: Driver emptydir doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 23 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:265
------------------------------
... skipping 27 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-eed63e0e-b6b2-42ca-88db-60ca80f1438c
STEP: Creating a pod to test consume configMaps
May 29 01:01:02.299: INFO: Waiting up to 5m0s for pod "pod-configmaps-23da9809-9eab-4003-a207-cc9231750f63" in namespace "configmap-9894" to be "Succeeded or Failed"
May 29 01:01:02.463: INFO: Pod "pod-configmaps-23da9809-9eab-4003-a207-cc9231750f63": Phase="Pending", Reason="", readiness=false. Elapsed: 164.149501ms
May 29 01:01:04.623: INFO: Pod "pod-configmaps-23da9809-9eab-4003-a207-cc9231750f63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.324277295s
STEP: Saw pod success
May 29 01:01:04.623: INFO: Pod "pod-configmaps-23da9809-9eab-4003-a207-cc9231750f63" satisfied condition "Succeeded or Failed"
May 29 01:01:04.794: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-configmaps-23da9809-9eab-4003-a207-cc9231750f63 container agnhost-container: <nil>
STEP: delete the pod
May 29 01:01:05.158: INFO: Waiting for pod pod-configmaps-23da9809-9eab-4003-a207-cc9231750f63 to disappear
May 29 01:01:05.318: INFO: Pod pod-configmaps-23da9809-9eab-4003-a207-cc9231750f63 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:01:05.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9894" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:05.681: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 144 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:01:06.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":3,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:06.266: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 85 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
May 29 01:01:00.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f5de46f-e23a-48f4-99f3-9d35ac2f8614" in namespace "projected-5855" to be "Succeeded or Failed"
May 29 01:01:01.123: INFO: Pod "downwardapi-volume-3f5de46f-e23a-48f4-99f3-9d35ac2f8614": Phase="Pending", Reason="", readiness=false. Elapsed: 161.89709ms
May 29 01:01:03.285: INFO: Pod "downwardapi-volume-3f5de46f-e23a-48f4-99f3-9d35ac2f8614": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323365019s
May 29 01:01:05.446: INFO: Pod "downwardapi-volume-3f5de46f-e23a-48f4-99f3-9d35ac2f8614": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.485259748s
STEP: Saw pod success
May 29 01:01:05.446: INFO: Pod "downwardapi-volume-3f5de46f-e23a-48f4-99f3-9d35ac2f8614" satisfied condition "Succeeded or Failed"
May 29 01:01:05.609: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod downwardapi-volume-3f5de46f-e23a-48f4-99f3-9d35ac2f8614 container client-container: <nil>
STEP: delete the pod
May 29 01:01:05.949: INFO: Waiting for pod downwardapi-volume-3f5de46f-e23a-48f4-99f3-9d35ac2f8614 to disappear
May 29 01:01:06.110: INFO: Pod downwardapi-volume-3f5de46f-e23a-48f4-99f3-9d35ac2f8614 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.467 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:06.478: INFO: Driver local doesn't support ext3 -- skipping
... skipping 37 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:00:58.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-map-cd63fe4f-7886-4adf-a267-0934a190bdb8
STEP: Creating a pod to test consume secrets
May 29 01:00:59.185: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1da1912-c96e-4d44-a197-e32a9c33d000" in namespace "projected-1486" to be "Succeeded or Failed"
May 29 01:00:59.351: INFO: Pod "pod-projected-secrets-e1da1912-c96e-4d44-a197-e32a9c33d000": Phase="Pending", Reason="", readiness=false. Elapsed: 165.689399ms
May 29 01:01:01.515: INFO: Pod "pod-projected-secrets-e1da1912-c96e-4d44-a197-e32a9c33d000": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329248067s
May 29 01:01:03.681: INFO: Pod "pod-projected-secrets-e1da1912-c96e-4d44-a197-e32a9c33d000": Phase="Pending", Reason="", readiness=false. Elapsed: 4.495860404s
May 29 01:01:05.843: INFO: Pod "pod-projected-secrets-e1da1912-c96e-4d44-a197-e32a9c33d000": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.657929797s
STEP: Saw pod success
May 29 01:01:05.843: INFO: Pod "pod-projected-secrets-e1da1912-c96e-4d44-a197-e32a9c33d000" satisfied condition "Succeeded or Failed"
May 29 01:01:06.006: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod pod-projected-secrets-e1da1912-c96e-4d44-a197-e32a9c33d000 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 29 01:01:06.340: INFO: Waiting for pod pod-projected-secrets-e1da1912-c96e-4d44-a197-e32a9c33d000 to disappear
May 29 01:01:06.505: INFO: Pod pod-projected-secrets-e1da1912-c96e-4d44-a197-e32a9c33d000 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.777 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:06.847: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 109 lines ...
May 29 00:59:46.923: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 29 00:59:46.923: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 29 00:59:46.923: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-9190-aws-scfdb4p      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-9190    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-9190-aws-scfdb4p,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-9190    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-9190-aws-scfdb4p,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a StorageClass provisioning-9190-aws-scfdb4p
STEP: creating a claim
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
May 29 00:59:47.581: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-t84gz" in namespace "provisioning-9190" to be "Succeeded or Failed"
May 29 00:59:47.762: INFO: Pod "pvc-volume-tester-writer-t84gz": Phase="Pending", Reason="", readiness=false. Elapsed: 180.838426ms
May 29 00:59:49.923: INFO: Pod "pvc-volume-tester-writer-t84gz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34226203s
May 29 00:59:52.085: INFO: Pod "pvc-volume-tester-writer-t84gz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.503776322s
May 29 00:59:54.248: INFO: Pod "pvc-volume-tester-writer-t84gz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.667332972s
May 29 00:59:56.409: INFO: Pod "pvc-volume-tester-writer-t84gz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.828542423s
May 29 00:59:58.571: INFO: Pod "pvc-volume-tester-writer-t84gz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.989920681s
... skipping 2 lines ...
May 29 01:00:05.058: INFO: Pod "pvc-volume-tester-writer-t84gz": Phase="Pending", Reason="", readiness=false. Elapsed: 17.476983593s
May 29 01:00:07.219: INFO: Pod "pvc-volume-tester-writer-t84gz": Phase="Pending", Reason="", readiness=false. Elapsed: 19.638124251s
May 29 01:00:09.380: INFO: Pod "pvc-volume-tester-writer-t84gz": Phase="Pending", Reason="", readiness=false. Elapsed: 21.799345458s
May 29 01:00:11.543: INFO: Pod "pvc-volume-tester-writer-t84gz": Phase="Pending", Reason="", readiness=false. Elapsed: 23.962052109s
May 29 01:00:13.705: INFO: Pod "pvc-volume-tester-writer-t84gz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.124338686s
STEP: Saw pod success
May 29 01:00:13.705: INFO: Pod "pvc-volume-tester-writer-t84gz" satisfied condition "Succeeded or Failed"
May 29 01:00:14.044: INFO: Pod pvc-volume-tester-writer-t84gz has the following logs: 
May 29 01:00:14.044: INFO: Deleting pod "pvc-volume-tester-writer-t84gz" in namespace "provisioning-9190"
May 29 01:00:14.221: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-t84gz" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-52-235.ap-northeast-2.compute.internal"
May 29 01:00:14.868: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-pq2hv" in namespace "provisioning-9190" to be "Succeeded or Failed"
May 29 01:00:15.029: INFO: Pod "pvc-volume-tester-reader-pq2hv": Phase="Pending", Reason="", readiness=false. Elapsed: 160.828903ms
May 29 01:00:17.191: INFO: Pod "pvc-volume-tester-reader-pq2hv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322854085s
May 29 01:00:19.352: INFO: Pod "pvc-volume-tester-reader-pq2hv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.484203991s
May 29 01:00:21.518: INFO: Pod "pvc-volume-tester-reader-pq2hv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649516189s
May 29 01:00:23.682: INFO: Pod "pvc-volume-tester-reader-pq2hv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.813595037s
May 29 01:00:25.843: INFO: Pod "pvc-volume-tester-reader-pq2hv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.974875926s
... skipping 2 lines ...
May 29 01:00:32.336: INFO: Pod "pvc-volume-tester-reader-pq2hv": Phase="Pending", Reason="", readiness=false. Elapsed: 17.467474167s
May 29 01:00:34.497: INFO: Pod "pvc-volume-tester-reader-pq2hv": Phase="Pending", Reason="", readiness=false. Elapsed: 19.628605016s
May 29 01:00:36.658: INFO: Pod "pvc-volume-tester-reader-pq2hv": Phase="Pending", Reason="", readiness=false. Elapsed: 21.789792438s
May 29 01:00:38.819: INFO: Pod "pvc-volume-tester-reader-pq2hv": Phase="Pending", Reason="", readiness=false. Elapsed: 23.951122342s
May 29 01:00:40.981: INFO: Pod "pvc-volume-tester-reader-pq2hv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.112878904s
STEP: Saw pod success
May 29 01:00:40.981: INFO: Pod "pvc-volume-tester-reader-pq2hv" satisfied condition "Succeeded or Failed"
May 29 01:00:41.160: INFO: Pod pvc-volume-tester-reader-pq2hv has the following logs: hello world

May 29 01:00:41.160: INFO: Deleting pod "pvc-volume-tester-reader-pq2hv" in namespace "provisioning-9190"
May 29 01:00:41.536: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-pq2hv" to be fully deleted
May 29 01:00:41.698: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-qtr5x] to have phase Bound
May 29 01:00:41.859: INFO: PersistentVolumeClaim pvc-qtr5x found and phase=Bound (161.002809ms)
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:09.156: INFO: Only supported for providers [azure] (not aws)
... skipping 142 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:11.046: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 88 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":5,"skipped":63,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:13.508: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 58 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SS
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:00:42.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 354 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":4,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:17.199: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 90 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:17.510: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 39 lines ...
• [SLOW TEST:59.823 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be restarted with a docker exec liveness probe with timeout 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:216
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout ","total":-1,"completed":2,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:18.890: INFO: Only supported for providers [vsphere] (not aws)
... skipping 67 lines ...
• [SLOW TEST:5.962 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1911
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":6,"skipped":75,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:19.543: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 132 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:19.659: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":8,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:19.659: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 100 lines ...
May 29 01:01:06.342: INFO: PersistentVolumeClaim pvc-rs9gr found but phase is Pending instead of Bound.
May 29 01:01:08.501: INFO: PersistentVolumeClaim pvc-rs9gr found and phase=Bound (2.318171433s)
May 29 01:01:08.501: INFO: Waiting up to 3m0s for PersistentVolume local-nwqkl to have phase Bound
May 29 01:01:08.660: INFO: PersistentVolume local-nwqkl found and phase=Bound (158.770603ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nn9h
STEP: Creating a pod to test subpath
May 29 01:01:09.139: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nn9h" in namespace "provisioning-7264" to be "Succeeded or Failed"
May 29 01:01:09.300: INFO: Pod "pod-subpath-test-preprovisionedpv-nn9h": Phase="Pending", Reason="", readiness=false. Elapsed: 159.412291ms
May 29 01:01:11.461: INFO: Pod "pod-subpath-test-preprovisionedpv-nn9h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320693469s
May 29 01:01:13.621: INFO: Pod "pod-subpath-test-preprovisionedpv-nn9h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480109096s
May 29 01:01:15.780: INFO: Pod "pod-subpath-test-preprovisionedpv-nn9h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.639570707s
May 29 01:01:17.939: INFO: Pod "pod-subpath-test-preprovisionedpv-nn9h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.798779793s
STEP: Saw pod success
May 29 01:01:17.940: INFO: Pod "pod-subpath-test-preprovisionedpv-nn9h" satisfied condition "Succeeded or Failed"
May 29 01:01:18.099: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-nn9h container test-container-volume-preprovisionedpv-nn9h: <nil>
STEP: delete the pod
May 29 01:01:18.429: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nn9h to disappear
May 29 01:01:18.588: INFO: Pod pod-subpath-test-preprovisionedpv-nn9h no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nn9h
May 29 01:01:18.588: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nn9h" in namespace "provisioning-7264"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:20.794: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:01:20.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8914" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:21.061: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 108 lines ...
• [SLOW TEST:13.277 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":6,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:21.339: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 87 lines ...
May 29 01:01:17.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on node default medium
May 29 01:01:18.217: INFO: Waiting up to 5m0s for pod "pod-e4a8e92a-1e07-491e-8691-b77e369276ce" in namespace "emptydir-5772" to be "Succeeded or Failed"
May 29 01:01:18.379: INFO: Pod "pod-e4a8e92a-1e07-491e-8691-b77e369276ce": Phase="Pending", Reason="", readiness=false. Elapsed: 162.014297ms
May 29 01:01:20.540: INFO: Pod "pod-e4a8e92a-1e07-491e-8691-b77e369276ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.323318596s
STEP: Saw pod success
May 29 01:01:20.540: INFO: Pod "pod-e4a8e92a-1e07-491e-8691-b77e369276ce" satisfied condition "Succeeded or Failed"
May 29 01:01:20.702: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod pod-e4a8e92a-1e07-491e-8691-b77e369276ce container test-container: <nil>
STEP: delete the pod
May 29 01:01:21.035: INFO: Waiting for pod pod-e4a8e92a-1e07-491e-8691-b77e369276ce to disappear
May 29 01:01:21.199: INFO: Pod pod-e4a8e92a-1e07-491e-8691-b77e369276ce no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:01:21.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5772" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":16,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:00:53.566: INFO: >>> kubeConfig: /root/.kube/config
... skipping 12 lines ...
May 29 01:01:01.816: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 29 01:01:01.975: INFO: Waiting up to 5m0s for PersistentVolumeClaims [nfsv8bgn] to have phase Bound
May 29 01:01:02.134: INFO: PersistentVolumeClaim nfsv8bgn found but phase is Pending instead of Bound.
May 29 01:01:04.294: INFO: PersistentVolumeClaim nfsv8bgn found and phase=Bound (2.319285119s)
STEP: Creating pod pod-subpath-test-dynamicpv-h2bt
STEP: Creating a pod to test subpath
May 29 01:01:04.771: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-h2bt" in namespace "provisioning-5472" to be "Succeeded or Failed"
May 29 01:01:04.943: INFO: Pod "pod-subpath-test-dynamicpv-h2bt": Phase="Pending", Reason="", readiness=false. Elapsed: 172.13716ms
May 29 01:01:07.103: INFO: Pod "pod-subpath-test-dynamicpv-h2bt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331531126s
May 29 01:01:09.261: INFO: Pod "pod-subpath-test-dynamicpv-h2bt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490403065s
May 29 01:01:11.420: INFO: Pod "pod-subpath-test-dynamicpv-h2bt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.649456459s
STEP: Saw pod success
May 29 01:01:11.421: INFO: Pod "pod-subpath-test-dynamicpv-h2bt" satisfied condition "Succeeded or Failed"
May 29 01:01:11.583: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-h2bt container test-container-subpath-dynamicpv-h2bt: <nil>
STEP: delete the pod
May 29 01:01:11.931: INFO: Waiting for pod pod-subpath-test-dynamicpv-h2bt to disappear
May 29 01:01:12.092: INFO: Pod pod-subpath-test-dynamicpv-h2bt no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-h2bt
May 29 01:01:12.092: INFO: Deleting pod "pod-subpath-test-dynamicpv-h2bt" in namespace "provisioning-5472"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:21.907: INFO: Driver hostPath doesn't support ext3 -- skipping
... skipping 106 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:01:11.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:117
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:01:22.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6580" for this suite.


• [SLOW TEST:11.669 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:117
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":2,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:22.795: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 90 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:00:13.712: INFO: >>> kubeConfig: /root/.kube/config
... skipping 6 lines ...
May 29 01:00:14.540: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-2123-aws-sc9f68c
STEP: creating a claim
May 29 01:00:14.702: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-ssfj
STEP: Creating a pod to test exec-volume-test
May 29 01:00:15.201: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-ssfj" in namespace "volume-2123" to be "Succeeded or Failed"
May 29 01:00:15.362: INFO: Pod "exec-volume-test-dynamicpv-ssfj": Phase="Pending", Reason="", readiness=false. Elapsed: 161.470213ms
May 29 01:00:17.526: INFO: Pod "exec-volume-test-dynamicpv-ssfj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325025831s
May 29 01:00:19.692: INFO: Pod "exec-volume-test-dynamicpv-ssfj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490988063s
May 29 01:00:21.855: INFO: Pod "exec-volume-test-dynamicpv-ssfj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.654148496s
May 29 01:00:24.020: INFO: Pod "exec-volume-test-dynamicpv-ssfj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.819120445s
May 29 01:00:26.182: INFO: Pod "exec-volume-test-dynamicpv-ssfj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.980959422s
... skipping 9 lines ...
May 29 01:00:47.911: INFO: Pod "exec-volume-test-dynamicpv-ssfj": Phase="Pending", Reason="", readiness=false. Elapsed: 32.710425904s
May 29 01:00:50.076: INFO: Pod "exec-volume-test-dynamicpv-ssfj": Phase="Pending", Reason="", readiness=false. Elapsed: 34.875290776s
May 29 01:00:52.254: INFO: Pod "exec-volume-test-dynamicpv-ssfj": Phase="Pending", Reason="", readiness=false. Elapsed: 37.052957639s
May 29 01:00:54.415: INFO: Pod "exec-volume-test-dynamicpv-ssfj": Phase="Pending", Reason="", readiness=false. Elapsed: 39.214754117s
May 29 01:00:56.577: INFO: Pod "exec-volume-test-dynamicpv-ssfj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.376683106s
STEP: Saw pod success
May 29 01:00:56.578: INFO: Pod "exec-volume-test-dynamicpv-ssfj" satisfied condition "Succeeded or Failed"
May 29 01:00:56.740: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod exec-volume-test-dynamicpv-ssfj container exec-container-dynamicpv-ssfj: <nil>
STEP: delete the pod
May 29 01:00:57.077: INFO: Waiting for pod exec-volume-test-dynamicpv-ssfj to disappear
May 29 01:00:57.239: INFO: Pod exec-volume-test-dynamicpv-ssfj no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-ssfj
May 29 01:00:57.239: INFO: Deleting pod "exec-volume-test-dynamicpv-ssfj" in namespace "volume-2123"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:24.461: INFO: Only supported for providers [gce gke] (not aws)
... skipping 31 lines ...
May 29 01:00:52.515: INFO: Creating resource for dynamic PV
May 29 01:00:52.515: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-5208-aws-sc4zqpq
STEP: creating a claim
STEP: Expanding non-expandable pvc
May 29 01:00:52.993: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
May 29 01:00:53.318: INFO: Error updating pvc aws6mntx: PersistentVolumeClaim "aws6mntx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5208-aws-sc4zqpq",
  	... // 2 identical fields
  }

May 29 01:00:55.635: INFO: Error updating pvc aws6mntx: PersistentVolumeClaim "aws6mntx" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
(the same update was retried and rejected with an identical diff 15 more times at roughly 2s intervals, the last at May 29 01:01:24.149)
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:154
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":4,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:24.987: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-downwardapi-7f2r
STEP: Creating a pod to test atomic-volume-subpath
May 29 01:01:04.047: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7f2r" in namespace "subpath-3345" to be "Succeeded or Failed"
May 29 01:01:04.212: INFO: Pod "pod-subpath-test-downwardapi-7f2r": Phase="Pending", Reason="", readiness=false. Elapsed: 164.803057ms
May 29 01:01:06.378: INFO: Pod "pod-subpath-test-downwardapi-7f2r": Phase="Running", Reason="", readiness=true. Elapsed: 2.330813268s
May 29 01:01:08.543: INFO: Pod "pod-subpath-test-downwardapi-7f2r": Phase="Running", Reason="", readiness=true. Elapsed: 4.495502169s
May 29 01:01:10.714: INFO: Pod "pod-subpath-test-downwardapi-7f2r": Phase="Running", Reason="", readiness=true. Elapsed: 6.666694896s
May 29 01:01:12.881: INFO: Pod "pod-subpath-test-downwardapi-7f2r": Phase="Running", Reason="", readiness=true. Elapsed: 8.833844702s
May 29 01:01:15.046: INFO: Pod "pod-subpath-test-downwardapi-7f2r": Phase="Running", Reason="", readiness=true. Elapsed: 10.998833429s
May 29 01:01:17.211: INFO: Pod "pod-subpath-test-downwardapi-7f2r": Phase="Running", Reason="", readiness=true. Elapsed: 13.163971789s
May 29 01:01:19.376: INFO: Pod "pod-subpath-test-downwardapi-7f2r": Phase="Running", Reason="", readiness=true. Elapsed: 15.328511307s
May 29 01:01:21.553: INFO: Pod "pod-subpath-test-downwardapi-7f2r": Phase="Running", Reason="", readiness=true. Elapsed: 17.505509564s
May 29 01:01:23.782: INFO: Pod "pod-subpath-test-downwardapi-7f2r": Phase="Running", Reason="", readiness=true. Elapsed: 19.735043119s
May 29 01:01:25.949: INFO: Pod "pod-subpath-test-downwardapi-7f2r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.901381287s
STEP: Saw pod success
May 29 01:01:25.949: INFO: Pod "pod-subpath-test-downwardapi-7f2r" satisfied condition "Succeeded or Failed"
May 29 01:01:26.116: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod pod-subpath-test-downwardapi-7f2r container test-container-subpath-downwardapi-7f2r: <nil>
STEP: delete the pod
May 29 01:01:26.480: INFO: Waiting for pod pod-subpath-test-downwardapi-7f2r to disappear
May 29 01:01:26.650: INFO: Pod pod-subpath-test-downwardapi-7f2r no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-7f2r
May 29 01:01:26.650: INFO: Deleting pod "pod-subpath-test-downwardapi-7f2r" in namespace "subpath-3345"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 423 lines ...
• [SLOW TEST:12.845 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:30.403: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 130 lines ...
• [SLOW TEST:23.564 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:01:24.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-28e0059f-8541-444b-8f0c-56c2e6baab72
STEP: Creating a pod to test consume configMaps
May 29 01:01:25.636: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f07004b-e4f8-4aad-8299-1ac24254a70a" in namespace "configmap-124" to be "Succeeded or Failed"
May 29 01:01:25.798: INFO: Pod "pod-configmaps-6f07004b-e4f8-4aad-8299-1ac24254a70a": Phase="Pending", Reason="", readiness=false. Elapsed: 162.431366ms
May 29 01:01:27.981: INFO: Pod "pod-configmaps-6f07004b-e4f8-4aad-8299-1ac24254a70a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34547063s
May 29 01:01:30.144: INFO: Pod "pod-configmaps-6f07004b-e4f8-4aad-8299-1ac24254a70a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508416225s
May 29 01:01:32.306: INFO: Pod "pod-configmaps-6f07004b-e4f8-4aad-8299-1ac24254a70a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.6704119s
STEP: Saw pod success
May 29 01:01:32.306: INFO: Pod "pod-configmaps-6f07004b-e4f8-4aad-8299-1ac24254a70a" satisfied condition "Succeeded or Failed"
May 29 01:01:32.468: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod pod-configmaps-6f07004b-e4f8-4aad-8299-1ac24254a70a container agnhost-container: <nil>
STEP: delete the pod
May 29 01:01:32.884: INFO: Waiting for pod pod-configmaps-6f07004b-e4f8-4aad-8299-1ac24254a70a to disappear
May 29 01:01:33.045: INFO: Pod pod-configmaps-6f07004b-e4f8-4aad-8299-1ac24254a70a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.901 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:33.382: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 23 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1570
------------------------------
... skipping 199 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:33.709: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 138 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
... skipping 195 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:555
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:584
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":2,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:34.442: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 99 lines ...
• [SLOW TEST:12.989 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":6,"skipped":18,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:34.597: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151

      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":2,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:00:49.858: INFO: >>> kubeConfig: /root/.kube/config
... skipping 6 lines ...
May 29 01:00:50.661: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1926-aws-sc7qccp
STEP: creating a claim
May 29 01:00:50.820: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-xsjw
STEP: Creating a pod to test subpath
May 29 01:00:51.300: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xsjw" in namespace "provisioning-1926" to be "Succeeded or Failed"
May 29 01:00:51.459: INFO: Pod "pod-subpath-test-dynamicpv-xsjw": Phase="Pending", Reason="", readiness=false. Elapsed: 158.969019ms
May 29 01:00:53.619: INFO: Pod "pod-subpath-test-dynamicpv-xsjw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318347397s
May 29 01:00:55.778: INFO: Pod "pod-subpath-test-dynamicpv-xsjw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.477685004s
May 29 01:00:57.937: INFO: Pod "pod-subpath-test-dynamicpv-xsjw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.636865634s
May 29 01:01:00.096: INFO: Pod "pod-subpath-test-dynamicpv-xsjw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796150479s
May 29 01:01:02.256: INFO: Pod "pod-subpath-test-dynamicpv-xsjw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.955462222s
... skipping 4 lines ...
May 29 01:01:13.056: INFO: Pod "pod-subpath-test-dynamicpv-xsjw": Phase="Pending", Reason="", readiness=false. Elapsed: 21.755330752s
May 29 01:01:15.215: INFO: Pod "pod-subpath-test-dynamicpv-xsjw": Phase="Pending", Reason="", readiness=false. Elapsed: 23.914666277s
May 29 01:01:17.374: INFO: Pod "pod-subpath-test-dynamicpv-xsjw": Phase="Pending", Reason="", readiness=false. Elapsed: 26.073770366s
May 29 01:01:19.544: INFO: Pod "pod-subpath-test-dynamicpv-xsjw": Phase="Pending", Reason="", readiness=false. Elapsed: 28.243401055s
May 29 01:01:21.705: INFO: Pod "pod-subpath-test-dynamicpv-xsjw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.405183432s
STEP: Saw pod success
May 29 01:01:21.705: INFO: Pod "pod-subpath-test-dynamicpv-xsjw" satisfied condition "Succeeded or Failed"
May 29 01:01:21.868: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-xsjw container test-container-subpath-dynamicpv-xsjw: <nil>
STEP: delete the pod
May 29 01:01:22.216: INFO: Waiting for pod pod-subpath-test-dynamicpv-xsjw to disappear
May 29 01:01:22.374: INFO: Pod pod-subpath-test-dynamicpv-xsjw no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-xsjw
May 29 01:01:22.375: INFO: Deleting pod "pod-subpath-test-dynamicpv-xsjw" in namespace "provisioning-1926"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:39.348: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 178 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:39.704: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 36 lines ...
May 29 01:01:34.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on node default medium
May 29 01:01:35.583: INFO: Waiting up to 5m0s for pod "pod-e76a56b8-84fe-4c36-b464-e5f00ee8e020" in namespace "emptydir-1612" to be "Succeeded or Failed"
May 29 01:01:35.744: INFO: Pod "pod-e76a56b8-84fe-4c36-b464-e5f00ee8e020": Phase="Pending", Reason="", readiness=false. Elapsed: 161.358863ms
May 29 01:01:37.912: INFO: Pod "pod-e76a56b8-84fe-4c36-b464-e5f00ee8e020": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329030945s
May 29 01:01:40.076: INFO: Pod "pod-e76a56b8-84fe-4c36-b464-e5f00ee8e020": Phase="Pending", Reason="", readiness=false. Elapsed: 4.493080061s
May 29 01:01:42.238: INFO: Pod "pod-e76a56b8-84fe-4c36-b464-e5f00ee8e020": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.655144868s
STEP: Saw pod success
May 29 01:01:42.238: INFO: Pod "pod-e76a56b8-84fe-4c36-b464-e5f00ee8e020" satisfied condition "Succeeded or Failed"
May 29 01:01:42.401: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod pod-e76a56b8-84fe-4c36-b464-e5f00ee8e020 container test-container: <nil>
STEP: delete the pod
May 29 01:01:42.745: INFO: Waiting for pod pod-e76a56b8-84fe-4c36-b464-e5f00ee8e020 to disappear
May 29 01:01:42.906: INFO: Pod pod-e76a56b8-84fe-4c36-b464-e5f00ee8e020 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.625 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":27,"failed":0}

SSSSSSSSSSS
------------------------------
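A sketch of the kind of pod the emptydir test above creates: an emptyDir volume on the default medium (node-backed storage, not tmpfs), mounted into a single throwaway container. Names, image, and command are illustrative:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumDefault = whatever backs the node's
					// filesystem, as in "on node default medium" above.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /mnt/file && chmod 0666 /mnt/file && ls -l /mnt"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
		},
	}
}
```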
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:43.299: INFO: Only supported for providers [gce gke] (not aws)
... skipping 45 lines ...
STEP: Building a namespace api object, basename pvc-protection
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:71
May 29 01:01:44.136: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PVC
May 29 01:01:44.460: INFO: error finding default storageClass : No default storage class found
[AfterEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:01:44.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pvc-protection-5618" for this suite.
[AfterEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:106
... skipping 2 lines ...
S [SKIPPING] in Spec Setup (BeforeEach) [1.473 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:124

  error finding default storageClass : No default storage class found

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:830
------------------------------
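The PVC Protection skip above is triggered by a default-StorageClass lookup failing. A sketch of that check, assuming only the well-known `storageclass.kubernetes.io/is-default-class` annotation (not the framework's own helper):

```go
package sketch

import (
	"context"
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const defaultClassAnnotation = "storageclass.kubernetes.io/is-default-class"

// defaultStorageClass scans all StorageClasses for the default-class
// annotation; no match reproduces the error seen in the log.
func defaultStorageClass(ctx context.Context, cs kubernetes.Interface) (*storagev1.StorageClass, error) {
	list, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	for i := range list.Items {
		if list.Items[i].Annotations[defaultClassAnnotation] == "true" {
			return &list.Items[i], nil
		}
	}
	return nil, fmt.Errorf("No default storage class found")
}
```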
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:44.811: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
May 29 01:01:39.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in container's command
May 29 01:01:40.713: INFO: Waiting up to 5m0s for pod "var-expansion-b4603ba2-4c7f-4257-a2d7-f199e8762be4" in namespace "var-expansion-3419" to be "Succeeded or Failed"
May 29 01:01:40.876: INFO: Pod "var-expansion-b4603ba2-4c7f-4257-a2d7-f199e8762be4": Phase="Pending", Reason="", readiness=false. Elapsed: 162.36249ms
May 29 01:01:43.038: INFO: Pod "var-expansion-b4603ba2-4c7f-4257-a2d7-f199e8762be4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32505494s
May 29 01:01:45.201: INFO: Pod "var-expansion-b4603ba2-4c7f-4257-a2d7-f199e8762be4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.487743337s
STEP: Saw pod success
May 29 01:01:45.201: INFO: Pod "var-expansion-b4603ba2-4c7f-4257-a2d7-f199e8762be4" satisfied condition "Succeeded or Failed"
May 29 01:01:45.364: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod var-expansion-b4603ba2-4c7f-4257-a2d7-f199e8762be4 container dapi-container: <nil>
STEP: delete the pod
May 29 01:01:45.703: INFO: Waiting for pod var-expansion-b4603ba2-4c7f-4257-a2d7-f199e8762be4 to disappear
May 29 01:01:45.865: INFO: Pod var-expansion-b4603ba2-4c7f-4257-a2d7-f199e8762be4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:46.268: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 38 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":7,"skipped":87,"failed":0}

S
------------------------------
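For the runAsNonRoot result above, a sketch of the container-level security context involved: with RunAsNonRoot set, the kubelet refuses to start any container whose effective UID is 0. Field values are illustrative:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

func nonRootContainer() corev1.Container {
	runAsNonRoot := true
	uid := int64(1000) // any non-zero UID passes; an explicit UID 0 is rejected
	return corev1.Container{
		Name:  "explicit-nonroot",
		Image: "busybox",
		SecurityContext: &corev1.SecurityContext{
			RunAsNonRoot: &runAsNonRoot,
			RunAsUser:    &uid,
		},
	}
}
```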
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:01:48.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-1716" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete","total":-1,"completed":8,"skipped":88,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:01:49.344: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 45 lines ...
• [SLOW TEST:8.500 seconds]
[k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":6,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
... skipping 105 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:155
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:02.046: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 44 lines ...
May 29 01:01:47.687: INFO: Unable to read jessie_udp@dns-test-service.dns-7663 from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:47.850: INFO: Unable to read jessie_tcp@dns-test-service.dns-7663 from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:48.011: INFO: Unable to read jessie_udp@dns-test-service.dns-7663.svc from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:48.175: INFO: Unable to read jessie_tcp@dns-test-service.dns-7663.svc from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:48.373: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7663.svc from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:48.533: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7663.svc from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:49.490: INFO: Lookups using dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7663 wheezy_tcp@dns-test-service.dns-7663 wheezy_udp@dns-test-service.dns-7663.svc wheezy_tcp@dns-test-service.dns-7663.svc wheezy_udp@_http._tcp.dns-test-service.dns-7663.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7663.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7663 jessie_tcp@dns-test-service.dns-7663 jessie_udp@dns-test-service.dns-7663.svc jessie_tcp@dns-test-service.dns-7663.svc jessie_udp@_http._tcp.dns-test-service.dns-7663.svc jessie_tcp@_http._tcp.dns-test-service.dns-7663.svc]

May 29 01:01:54.650: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:54.809: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:54.973: INFO: Unable to read wheezy_udp@dns-test-service.dns-7663 from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:55.133: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7663 from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:55.293: INFO: Unable to read wheezy_udp@dns-test-service.dns-7663.svc from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:55.455: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7663.svc from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:55.615: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7663.svc from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:55.794: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7663.svc from pod dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423: the server could not find the requested resource (get pods dns-test-dd21fbaa-6764-4b68-81af-481d2beef423)
May 29 01:01:59.036: INFO: Lookups using dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7663 wheezy_tcp@dns-test-service.dns-7663 wheezy_udp@dns-test-service.dns-7663.svc wheezy_tcp@dns-test-service.dns-7663.svc wheezy_udp@_http._tcp.dns-test-service.dns-7663.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7663.svc]

May 29 01:02:03.974: INFO: DNS probes using dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:41.661 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":47,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:01:46.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 103 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] Volume limits should verify that all nodes have volume limits","total":-1,"completed":4,"skipped":50,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:01:40.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 293 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1410
    should modify fsGroup if fsGroupPolicy=File
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1434
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":1,"skipped":9,"failed":0}

SSSS
------------------------------
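The `fsGroupPolicy=File` case above is driven by the CSIDriver object: with the File policy, the kubelet always applies the pod's fsGroup ownership to the volume, regardless of fstype or access mode. A sketch of that spec, with an assumed driver name:

```go
package sketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func fileFSGroupDriver() *storagev1.CSIDriver {
	policy := storagev1.FileFSGroupPolicy // "File"
	return &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-mock.example.com"},
		Spec: storagev1.CSIDriverSpec{
			FSGroupPolicy: &policy,
		},
	}
}
```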
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:11.062: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 208 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:02:15.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6446" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:15.474: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 41 lines ...
May 29 01:02:04.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token: 
May 29 01:02:05.778: INFO: Waiting up to 5m0s for pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408" in namespace "svcaccounts-247" to be "Succeeded or Failed"
May 29 01:02:05.943: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408": Phase="Pending", Reason="", readiness=false. Elapsed: 164.635922ms
May 29 01:02:08.102: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.323124874s
STEP: Saw pod success
May 29 01:02:08.102: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408" satisfied condition "Succeeded or Failed"
May 29 01:02:08.260: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod test-pod-751e8762-3c65-4585-b280-29e3e90ca408 container agnhost-container: <nil>
STEP: delete the pod
May 29 01:02:08.583: INFO: Waiting for pod test-pod-751e8762-3c65-4585-b280-29e3e90ca408 to disappear
May 29 01:02:08.741: INFO: Pod test-pod-751e8762-3c65-4585-b280-29e3e90ca408 no longer exists
STEP: Creating a pod to test service account token: 
May 29 01:02:08.909: INFO: Waiting up to 5m0s for pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408" in namespace "svcaccounts-247" to be "Succeeded or Failed"
May 29 01:02:09.071: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408": Phase="Pending", Reason="", readiness=false. Elapsed: 162.247528ms
May 29 01:02:11.230: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.320741721s
STEP: Saw pod success
May 29 01:02:11.230: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408" satisfied condition "Succeeded or Failed"
May 29 01:02:11.389: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod test-pod-751e8762-3c65-4585-b280-29e3e90ca408 container agnhost-container: <nil>
STEP: delete the pod
May 29 01:02:11.714: INFO: Waiting for pod test-pod-751e8762-3c65-4585-b280-29e3e90ca408 to disappear
May 29 01:02:11.873: INFO: Pod test-pod-751e8762-3c65-4585-b280-29e3e90ca408 no longer exists
STEP: Creating a pod to test service account token: 
May 29 01:02:12.033: INFO: Waiting up to 5m0s for pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408" in namespace "svcaccounts-247" to be "Succeeded or Failed"
May 29 01:02:12.193: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408": Phase="Pending", Reason="", readiness=false. Elapsed: 159.580999ms
May 29 01:02:14.351: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317867646s
May 29 01:02:16.509: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47617596s
May 29 01:02:18.668: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.634763907s
STEP: Saw pod success
May 29 01:02:18.668: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408" satisfied condition "Succeeded or Failed"
May 29 01:02:18.826: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod test-pod-751e8762-3c65-4585-b280-29e3e90ca408 container agnhost-container: <nil>
STEP: delete the pod
May 29 01:02:19.154: INFO: Waiting for pod test-pod-751e8762-3c65-4585-b280-29e3e90ca408 to disappear
May 29 01:02:19.312: INFO: Pod test-pod-751e8762-3c65-4585-b280-29e3e90ca408 no longer exists
STEP: Creating a pod to test service account token: 
May 29 01:02:19.471: INFO: Waiting up to 5m0s for pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408" in namespace "svcaccounts-247" to be "Succeeded or Failed"
May 29 01:02:19.630: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408": Phase="Pending", Reason="", readiness=false. Elapsed: 158.031994ms
May 29 01:02:21.788: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316644834s
May 29 01:02:23.947: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.475083863s
STEP: Saw pod success
May 29 01:02:23.947: INFO: Pod "test-pod-751e8762-3c65-4585-b280-29e3e90ca408" satisfied condition "Succeeded or Failed"
May 29 01:02:24.105: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod test-pod-751e8762-3c65-4585-b280-29e3e90ca408 container agnhost-container: <nil>
STEP: delete the pod
May 29 01:02:24.431: INFO: Waiting for pod test-pod-751e8762-3c65-4585-b280-29e3e90ca408 to disappear
May 29 01:02:24.589: INFO: Pod test-pod-751e8762-3c65-4585-b280-29e3e90ca408 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:20.085 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":48,"failed":0}

SSS
------------------------------
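A sketch of the pod-level security context behind the ServiceAccounts test above: with RunAsUser/FsGroup present, the kubelet applies that ownership and permission to the projected service-account token volume. UID/GID values are illustrative:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

func fsGroupPodSecurityContext() *corev1.PodSecurityContext {
	uid := int64(1000)
	gid := int64(2000)
	return &corev1.PodSecurityContext{
		RunAsUser: &uid, // token file ownership follows this UID
		FSGroup:   &gid, // volume group ownership/permissions follow this GID
	}
}
```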
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:24.936: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 53 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:97
------------------------------
... skipping 21 lines ...
May 29 01:02:06.391: INFO: PersistentVolumeClaim pvc-jq5kb found but phase is Pending instead of Bound.
May 29 01:02:08.557: INFO: PersistentVolumeClaim pvc-jq5kb found and phase=Bound (8.830164251s)
May 29 01:02:08.557: INFO: Waiting up to 3m0s for PersistentVolume local-9rbjq to have phase Bound
May 29 01:02:08.723: INFO: PersistentVolume local-9rbjq found and phase=Bound (166.762121ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qxj9
STEP: Creating a pod to test subpath
May 29 01:02:09.224: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qxj9" in namespace "provisioning-4276" to be "Succeeded or Failed"
May 29 01:02:09.390: INFO: Pod "pod-subpath-test-preprovisionedpv-qxj9": Phase="Pending", Reason="", readiness=false. Elapsed: 165.614004ms
May 29 01:02:11.556: INFO: Pod "pod-subpath-test-preprovisionedpv-qxj9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331294304s
May 29 01:02:13.727: INFO: Pod "pod-subpath-test-preprovisionedpv-qxj9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.502394652s
May 29 01:02:15.892: INFO: Pod "pod-subpath-test-preprovisionedpv-qxj9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.667960732s
May 29 01:02:18.058: INFO: Pod "pod-subpath-test-preprovisionedpv-qxj9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.834038905s
STEP: Saw pod success
May 29 01:02:18.059: INFO: Pod "pod-subpath-test-preprovisionedpv-qxj9" satisfied condition "Succeeded or Failed"
May 29 01:02:18.224: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-qxj9 container test-container-subpath-preprovisionedpv-qxj9: <nil>
STEP: delete the pod
May 29 01:02:18.565: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qxj9 to disappear
May 29 01:02:18.731: INFO: Pod pod-subpath-test-preprovisionedpv-qxj9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qxj9
May 29 01:02:18.731: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qxj9" in namespace "provisioning-4276"
STEP: Creating pod pod-subpath-test-preprovisionedpv-qxj9
STEP: Creating a pod to test subpath
May 29 01:02:19.063: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qxj9" in namespace "provisioning-4276" to be "Succeeded or Failed"
May 29 01:02:19.228: INFO: Pod "pod-subpath-test-preprovisionedpv-qxj9": Phase="Pending", Reason="", readiness=false. Elapsed: 165.365995ms
May 29 01:02:21.394: INFO: Pod "pod-subpath-test-preprovisionedpv-qxj9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331530781s
May 29 01:02:23.561: INFO: Pod "pod-subpath-test-preprovisionedpv-qxj9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.497607233s
STEP: Saw pod success
May 29 01:02:23.561: INFO: Pod "pod-subpath-test-preprovisionedpv-qxj9" satisfied condition "Succeeded or Failed"
May 29 01:02:23.726: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-qxj9 container test-container-subpath-preprovisionedpv-qxj9: <nil>
STEP: delete the pod
May 29 01:02:24.078: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qxj9 to disappear
May 29 01:02:24.244: INFO: Pod pod-subpath-test-preprovisionedpv-qxj9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qxj9
May 29 01:02:24.244: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qxj9" in namespace "provisioning-4276"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":7,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:26.482: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 32 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:00:16.813: INFO: >>> kubeConfig: /root/.kube/config
... skipping 162 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":35,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:27.093: INFO: Only supported for providers [gce gke] (not aws)
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":4,"skipped":29,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:01:25.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
• [SLOW TEST:62.880 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":5,"skipped":29,"failed":0}

S
------------------------------
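The Watchers test above observes add/update/delete notifications on configmaps. A minimal client-go sketch of opening such a watch and draining its event channel; the label selector is illustrative:

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func watchConfigMaps(ctx context.Context, cs kubernetes.Interface, ns string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{LabelSelector: "watch-this-configmap=true"})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// ev.Type is Added, Modified, or Deleted, delivered in the order
		// the API server observed the changes.
		fmt.Printf("%s: %v\n", ev.Type, ev.Object.GetObjectKind().GroupVersionKind())
	}
	return nil
}
```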
[BeforeEach] [k8s.io] [sig-node] Mount propagation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 63 lines ...
May 29 01:01:49.507: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:01:50.578: INFO: Exec stderr: ""
May 29 01:01:53.058: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-1425"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-1425"/host; echo host > "/var/lib/kubelet/mount-propagation-1425"/host/file] Namespace:mount-propagation-1425 PodName:hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-7zzbx ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 29 01:01:53.058: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:01:54.271: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1425 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:01:54.271: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:01:55.328: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
May 29 01:01:55.487: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1425 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:01:55.487: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:01:56.542: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
May 29 01:01:56.701: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1425 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:01:56.701: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:01:57.806: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
May 29 01:01:57.964: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1425 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:01:57.964: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:01:59.023: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
May 29 01:01:59.182: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1425 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:01:59.182: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:00.232: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
May 29 01:02:00.393: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1425 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:00.393: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:01.501: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
May 29 01:02:01.660: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1425 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:01.660: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:02.737: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
May 29 01:02:02.895: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1425 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:02.895: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:03.968: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
May 29 01:02:04.126: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1425 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:04.126: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:05.183: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
May 29 01:02:05.341: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1425 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:05.341: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:06.441: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
May 29 01:02:06.600: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1425 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:06.600: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:07.643: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
May 29 01:02:07.802: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1425 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:07.802: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:08.858: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
May 29 01:02:09.034: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1425 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:09.034: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:10.316: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
May 29 01:02:10.474: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1425 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:10.474: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:11.515: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
May 29 01:02:11.678: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1425 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:11.678: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:12.731: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
May 29 01:02:12.889: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1425 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:12.889: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:14.177: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
May 29 01:02:14.337: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1425 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:14.337: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:15.392: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
May 29 01:02:15.550: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1425 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:15.550: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:16.673: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
May 29 01:02:16.832: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1425 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:16.832: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:17.904: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
May 29 01:02:18.068: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1425 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 29 01:02:18.068: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:19.074: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
May 29 01:02:19.074: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-1425"/master/file` = master] Namespace:mount-propagation-1425 PodName:hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-7zzbx ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 29 01:02:19.074: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:20.102: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-1425"/slave/file] Namespace:mount-propagation-1425 PodName:hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-7zzbx ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 29 01:02:20.102: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:02:21.192: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-1425"/host] Namespace:mount-propagation-1425 PodName:hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-7zzbx ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 29 01:02:21.192: INFO: >>> kubeConfig: /root/.kube/config
... skipping 21 lines ...
• [SLOW TEST:84.790 seconds]
[k8s.io] [sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should propagate mounts to the host
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":8,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:28.694: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 121 lines ...
• [SLOW TEST:24.088 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":7,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:29.216: INFO: Only supported for providers [vsphere] (not aws)
... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:02:30.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4407" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":8,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:31.156: INFO: Only supported for providers [openstack] (not aws)
... skipping 36 lines ...
May 29 01:02:04.077: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 29 01:02:04.077: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 29 01:02:04.077: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-4900-nfs-sc5g7pj      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:example.com/nfs-provisioning-4900,Parameters:map[string]string{mountOptions: vers=4.1,},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-4900    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-4900-nfs-sc5g7pj,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-4900    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-4900-nfs-sc5g7pj,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a StorageClass provisioning-4900-nfs-sc5g7pj
STEP: creating a claim
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
May 29 01:02:04.726: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-wm6lx" in namespace "provisioning-4900" to be "Succeeded or Failed"
May 29 01:02:04.887: INFO: Pod "pvc-volume-tester-writer-wm6lx": Phase="Pending", Reason="", readiness=false. Elapsed: 160.715501ms
May 29 01:02:07.048: INFO: Pod "pvc-volume-tester-writer-wm6lx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321865204s
May 29 01:02:09.209: INFO: Pod "pvc-volume-tester-writer-wm6lx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48293511s
May 29 01:02:11.371: INFO: Pod "pvc-volume-tester-writer-wm6lx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.644233203s
STEP: Saw pod success
May 29 01:02:11.371: INFO: Pod "pvc-volume-tester-writer-wm6lx" satisfied condition "Succeeded or Failed"
May 29 01:02:11.701: INFO: Pod pvc-volume-tester-writer-wm6lx has the following logs: 
May 29 01:02:11.701: INFO: Deleting pod "pvc-volume-tester-writer-wm6lx" in namespace "provisioning-4900"
May 29 01:02:11.868: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-wm6lx" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-47-14.ap-northeast-2.compute.internal"
May 29 01:02:12.516: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-vbrw7" in namespace "provisioning-4900" to be "Succeeded or Failed"
May 29 01:02:12.676: INFO: Pod "pvc-volume-tester-reader-vbrw7": Phase="Pending", Reason="", readiness=false. Elapsed: 160.667629ms
May 29 01:02:14.838: INFO: Pod "pvc-volume-tester-reader-vbrw7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32191164s
May 29 01:02:17.001: INFO: Pod "pvc-volume-tester-reader-vbrw7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.484838798s
May 29 01:02:19.162: INFO: Pod "pvc-volume-tester-reader-vbrw7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.646057944s
STEP: Saw pod success
May 29 01:02:19.162: INFO: Pod "pvc-volume-tester-reader-vbrw7" satisfied condition "Succeeded or Failed"
May 29 01:02:19.326: INFO: Pod pvc-volume-tester-reader-vbrw7 has the following logs: hello world

May 29 01:02:19.326: INFO: Deleting pod "pvc-volume-tester-reader-vbrw7" in namespace "provisioning-4900"
May 29 01:02:19.492: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-vbrw7" to be fully deleted
May 29 01:02:19.652: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-6m9xs] to have phase Bound
May 29 01:02:19.813: INFO: PersistentVolumeClaim pvc-6m9xs found and phase=Bound (160.767231ms)
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":3,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:33.806: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 155 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 9 lines ...
STEP: Destroying namespace "node-problem-detector-7919" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [1.167 seconds]
[k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:59

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:54
------------------------------
... skipping 30 lines ...
May 29 01:02:08.301: INFO: PersistentVolume nfs-gv9r9 found and phase=Bound (158.441686ms)
May 29 01:02:08.459: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-fdzlw] to have phase Bound
May 29 01:02:08.618: INFO: PersistentVolumeClaim pvc-fdzlw found and phase=Bound (158.709939ms)
STEP: Checking pod has write access to PersistentVolumes
May 29 01:02:08.777: INFO: Creating nfs test pod
May 29 01:02:08.937: INFO: Pod should terminate with exitcode 0 (success)
May 29 01:02:08.937: INFO: Waiting up to 5m0s for pod "pvc-tester-wwd74" in namespace "pv-6837" to be "Succeeded or Failed"
May 29 01:02:09.121: INFO: Pod "pvc-tester-wwd74": Phase="Pending", Reason="", readiness=false. Elapsed: 183.249575ms
May 29 01:02:11.279: INFO: Pod "pvc-tester-wwd74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342001734s
May 29 01:02:13.443: INFO: Pod "pvc-tester-wwd74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.505258632s
STEP: Saw pod success
May 29 01:02:13.443: INFO: Pod "pvc-tester-wwd74" satisfied condition "Succeeded or Failed"
May 29 01:02:13.443: INFO: Pod pvc-tester-wwd74 succeeded 
May 29 01:02:13.443: INFO: Deleting pod "pvc-tester-wwd74" in namespace "pv-6837"
May 29 01:02:13.607: INFO: Wait up to 5m0s for pod "pvc-tester-wwd74" to be fully deleted
May 29 01:02:13.924: INFO: Creating nfs test pod
May 29 01:02:14.083: INFO: Pod should terminate with exitcode 0 (success)
May 29 01:02:14.083: INFO: Waiting up to 5m0s for pod "pvc-tester-726vb" in namespace "pv-6837" to be "Succeeded or Failed"
May 29 01:02:14.241: INFO: Pod "pvc-tester-726vb": Phase="Pending", Reason="", readiness=false. Elapsed: 158.230549ms
May 29 01:02:16.400: INFO: Pod "pvc-tester-726vb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316840077s
May 29 01:02:18.560: INFO: Pod "pvc-tester-726vb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.477053668s
STEP: Saw pod success
May 29 01:02:18.560: INFO: Pod "pvc-tester-726vb" satisfied condition "Succeeded or Failed"
May 29 01:02:18.560: INFO: Pod pvc-tester-726vb succeeded 
May 29 01:02:18.560: INFO: Deleting pod "pvc-tester-726vb" in namespace "pv-6837"
May 29 01:02:18.724: INFO: Wait up to 5m0s for pod "pvc-tester-726vb" to be fully deleted
May 29 01:02:19.042: INFO: Creating nfs test pod
May 29 01:02:19.202: INFO: Pod should terminate with exitcode 0 (success)
May 29 01:02:19.202: INFO: Waiting up to 5m0s for pod "pvc-tester-lflk9" in namespace "pv-6837" to be "Succeeded or Failed"
May 29 01:02:19.360: INFO: Pod "pvc-tester-lflk9": Phase="Pending", Reason="", readiness=false. Elapsed: 158.490562ms
May 29 01:02:21.519: INFO: Pod "pvc-tester-lflk9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.317395587s
STEP: Saw pod success
May 29 01:02:21.519: INFO: Pod "pvc-tester-lflk9" satisfied condition "Succeeded or Failed"
May 29 01:02:21.519: INFO: Pod pvc-tester-lflk9 succeeded 
May 29 01:02:21.519: INFO: Deleting pod "pvc-tester-lflk9" in namespace "pv-6837"
May 29 01:02:21.697: INFO: Wait up to 5m0s for pod "pvc-tester-lflk9" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
May 29 01:02:22.176: INFO: Deleting PVC pvc-5kscj to trigger reclamation of PV nfs-tzpps
May 29 01:02:22.176: INFO: Deleting PersistentVolumeClaim "pvc-5kscj"
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 3 PVs and 3 PVCs: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:243
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:02:37.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:02:38.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-8938" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":4,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:38.836: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 136 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":7,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:40.173: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 95 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:40.549: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 163 lines ...
May 29 01:01:58.503: INFO: PersistentVolumeClaim csi-hostpath2ptnh found but phase is Pending instead of Bound.
May 29 01:02:00.660: INFO: PersistentVolumeClaim csi-hostpath2ptnh found but phase is Pending instead of Bound.
May 29 01:02:02.818: INFO: PersistentVolumeClaim csi-hostpath2ptnh found but phase is Pending instead of Bound.
May 29 01:02:04.976: INFO: PersistentVolumeClaim csi-hostpath2ptnh found and phase=Bound (32.542715633s)
STEP: Creating pod pod-subpath-test-dynamicpv-6trj
STEP: Creating a pod to test subpath
May 29 01:02:05.449: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-6trj" in namespace "provisioning-8765" to be "Succeeded or Failed"
May 29 01:02:05.607: INFO: Pod "pod-subpath-test-dynamicpv-6trj": Phase="Pending", Reason="", readiness=false. Elapsed: 157.466663ms
May 29 01:02:07.765: INFO: Pod "pod-subpath-test-dynamicpv-6trj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315069904s
May 29 01:02:09.924: INFO: Pod "pod-subpath-test-dynamicpv-6trj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474261337s
May 29 01:02:12.082: INFO: Pod "pod-subpath-test-dynamicpv-6trj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.632069094s
May 29 01:02:14.240: INFO: Pod "pod-subpath-test-dynamicpv-6trj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.790686104s
May 29 01:02:16.398: INFO: Pod "pod-subpath-test-dynamicpv-6trj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.948251722s
May 29 01:02:18.556: INFO: Pod "pod-subpath-test-dynamicpv-6trj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.106014978s
STEP: Saw pod success
May 29 01:02:18.556: INFO: Pod "pod-subpath-test-dynamicpv-6trj" satisfied condition "Succeeded or Failed"
May 29 01:02:18.713: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-6trj container test-container-subpath-dynamicpv-6trj: <nil>
STEP: delete the pod
May 29 01:02:19.044: INFO: Waiting for pod pod-subpath-test-dynamicpv-6trj to disappear
May 29 01:02:19.202: INFO: Pod pod-subpath-test-dynamicpv-6trj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-6trj
May 29 01:02:19.202: INFO: Deleting pod "pod-subpath-test-dynamicpv-6trj" in namespace "provisioning-8765"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":13,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 101 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":8,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:44.169: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 89 lines ...
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 29 01:02:12.397: INFO: File wheezy_udp@dns-test-service-3.dns-4818.svc.cluster.local from pod  dns-4818/dns-test-6affc6b9-6bd1-431a-bd62-4629ea20e8ef contains 'foo.example.com.
' instead of 'bar.example.com.'
May 29 01:02:12.565: INFO: Lookups using dns-4818/dns-test-6affc6b9-6bd1-431a-bd62-4629ea20e8ef failed for: [wheezy_udp@dns-test-service-3.dns-4818.svc.cluster.local]

May 29 01:02:17.733: INFO: File wheezy_udp@dns-test-service-3.dns-4818.svc.cluster.local from pod  dns-4818/dns-test-6affc6b9-6bd1-431a-bd62-4629ea20e8ef contains 'foo.example.com.
' instead of 'bar.example.com.'
May 29 01:02:17.894: INFO: Lookups using dns-4818/dns-test-6affc6b9-6bd1-431a-bd62-4629ea20e8ef failed for: [wheezy_udp@dns-test-service-3.dns-4818.svc.cluster.local]

May 29 01:02:22.890: INFO: File jessie_udp@dns-test-service-3.dns-4818.svc.cluster.local from pod  dns-4818/dns-test-6affc6b9-6bd1-431a-bd62-4629ea20e8ef contains 'foo.example.com.
' instead of 'bar.example.com.'
May 29 01:02:22.891: INFO: Lookups using dns-4818/dns-test-6affc6b9-6bd1-431a-bd62-4629ea20e8ef failed for: [jessie_udp@dns-test-service-3.dns-4818.svc.cluster.local]

May 29 01:02:27.889: INFO: DNS probes using dns-test-6affc6b9-6bd1-431a-bd62-4629ea20e8ef succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4818.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4818.svc.cluster.local; sleep 1; done
... skipping 17 lines ...
• [SLOW TEST:83.087 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:44.241: INFO: Driver csi-hostpath doesn't support ext4 -- skipping
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64
[It] should support unsafe sysctls which are actually whitelisted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.163 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should support unsafe sysctls which are actually whitelisted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":4,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:44.594: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 179 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:02:46.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-475" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":6,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] API priority and fairness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 135 lines ...
May 29 01:01:55.070: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1508
May 29 01:01:55.227: INFO: creating *v1.StatefulSet: csi-mock-volumes-1508-9260/csi-mockplugin-attacher
May 29 01:01:55.385: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1508"
May 29 01:01:55.541: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1508 to register on node ip-172-20-52-235.ap-northeast-2.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
May 29 01:02:05.813: INFO: Error getting logs for pod inline-volume-rvvhl: the server rejected our request for an unknown reason (get pods inline-volume-rvvhl)
May 29 01:02:05.971: INFO: Deleting pod "inline-volume-rvvhl" in namespace "csi-mock-volumes-1508"
May 29 01:02:06.128: INFO: Wait up to 5m0s for pod "inline-volume-rvvhl" to be fully deleted
STEP: Deleting the previously created pod
May 29 01:02:14.442: INFO: Deleting pod "pvc-volume-tester-zggx9" in namespace "csi-mock-volumes-1508"
May 29 01:02:14.601: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zggx9" to be fully deleted
STEP: Checking CSI driver logs
May 29 01:02:23.079: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-1508
May 29 01:02:23.079: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 39406a29-99fe-4f51-a9bf-d20e0e8b1897
May 29 01:02:23.079: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
May 29 01:02:23.079: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
May 29 01:02:23.079: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-zggx9
May 29 01:02:23.079: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-36b65298ed59b33d04c2d26a263d47e7f3fcf3558c5910667fb6eda3619ecd5d","target_path":"/var/lib/kubelet/pods/39406a29-99fe-4f51-a9bf-d20e0e8b1897/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-zggx9
May 29 01:02:23.079: INFO: Deleting pod "pvc-volume-tester-zggx9" in namespace "csi-mock-volumes-1508"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-1508
STEP: Waiting for namespaces [csi-mock-volumes-1508] to vanish
STEP: uninstalling csi mock driver
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:437
    contain ephemeral=true when using inline volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:487
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":9,"skipped":43,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:02:52.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-6713" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget","total":-1,"completed":10,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:52.851: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-bdd0eafd-efa8-483a-8b7f-4fac159838ff
STEP: Creating a pod to test consume secrets
May 29 01:02:43.031: INFO: Waiting up to 5m0s for pod "pod-secrets-8bdb9188-c0b6-41e1-a991-e8638bea1a98" in namespace "secrets-3273" to be "Succeeded or Failed"
May 29 01:02:43.194: INFO: Pod "pod-secrets-8bdb9188-c0b6-41e1-a991-e8638bea1a98": Phase="Pending", Reason="", readiness=false. Elapsed: 162.850943ms
May 29 01:02:45.357: INFO: Pod "pod-secrets-8bdb9188-c0b6-41e1-a991-e8638bea1a98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325705693s
May 29 01:02:47.519: INFO: Pod "pod-secrets-8bdb9188-c0b6-41e1-a991-e8638bea1a98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488560764s
May 29 01:02:49.685: INFO: Pod "pod-secrets-8bdb9188-c0b6-41e1-a991-e8638bea1a98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.653898715s
May 29 01:02:51.848: INFO: Pod "pod-secrets-8bdb9188-c0b6-41e1-a991-e8638bea1a98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.817009564s
May 29 01:02:54.011: INFO: Pod "pod-secrets-8bdb9188-c0b6-41e1-a991-e8638bea1a98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.980056783s
STEP: Saw pod success
May 29 01:02:54.011: INFO: Pod "pod-secrets-8bdb9188-c0b6-41e1-a991-e8638bea1a98" satisfied condition "Succeeded or Failed"
May 29 01:02:54.175: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod pod-secrets-8bdb9188-c0b6-41e1-a991-e8638bea1a98 container secret-volume-test: <nil>
STEP: delete the pod
May 29 01:02:54.518: INFO: Waiting for pod pod-secrets-8bdb9188-c0b6-41e1-a991-e8638bea1a98 to disappear
May 29 01:02:54.706: INFO: Pod pod-secrets-8bdb9188-c0b6-41e1-a991-e8638bea1a98 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:169
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:57.186: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 74 lines ...
• [SLOW TEST:12.703 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:02:59.766: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 119 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:00.000: INFO: Only supported for providers [gce gke] (not aws)
... skipping 80 lines ...
• [SLOW TEST:5.771 seconds]
[k8s.io] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:03.003: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 52 lines ...
May 29 01:02:06.755: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6838-aws-sclp6js
STEP: creating a claim
May 29 01:02:06.919: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-rld9
STEP: Creating a pod to test atomic-volume-subpath
May 29 01:02:07.412: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rld9" in namespace "provisioning-6838" to be "Succeeded or Failed"
May 29 01:02:07.575: INFO: Pod "pod-subpath-test-dynamicpv-rld9": Phase="Pending", Reason="", readiness=false. Elapsed: 163.192664ms
May 29 01:02:09.740: INFO: Pod "pod-subpath-test-dynamicpv-rld9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327933677s
May 29 01:02:11.903: INFO: Pod "pod-subpath-test-dynamicpv-rld9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.491288791s
May 29 01:02:14.067: INFO: Pod "pod-subpath-test-dynamicpv-rld9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.655181623s
May 29 01:02:16.236: INFO: Pod "pod-subpath-test-dynamicpv-rld9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.823735924s
May 29 01:02:18.399: INFO: Pod "pod-subpath-test-dynamicpv-rld9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.987427131s
... skipping 7 lines ...
May 29 01:02:35.713: INFO: Pod "pod-subpath-test-dynamicpv-rld9": Phase="Running", Reason="", readiness=true. Elapsed: 28.301398998s
May 29 01:02:37.878: INFO: Pod "pod-subpath-test-dynamicpv-rld9": Phase="Running", Reason="", readiness=true. Elapsed: 30.465753451s
May 29 01:02:40.042: INFO: Pod "pod-subpath-test-dynamicpv-rld9": Phase="Running", Reason="", readiness=true. Elapsed: 32.629601429s
May 29 01:02:42.206: INFO: Pod "pod-subpath-test-dynamicpv-rld9": Phase="Running", Reason="", readiness=true. Elapsed: 34.793846816s
May 29 01:02:44.382: INFO: Pod "pod-subpath-test-dynamicpv-rld9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.970529031s
STEP: Saw pod success
May 29 01:02:44.383: INFO: Pod "pod-subpath-test-dynamicpv-rld9" satisfied condition "Succeeded or Failed"
May 29 01:02:44.546: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-rld9 container test-container-subpath-dynamicpv-rld9: <nil>
STEP: delete the pod
May 29 01:02:44.885: INFO: Waiting for pod pod-subpath-test-dynamicpv-rld9 to disappear
May 29 01:02:45.048: INFO: Pod pod-subpath-test-dynamicpv-rld9 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rld9
May 29 01:02:45.048: INFO: Deleting pod "pod-subpath-test-dynamicpv-rld9" in namespace "provisioning-6838"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":33,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":44,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:02:55.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
May 29 01:02:56.099: INFO: Waiting up to 5m0s for pod "downward-api-458662a5-e089-404e-b612-b0c738953c15" in namespace "downward-api-7117" to be "Succeeded or Failed"
May 29 01:02:56.265: INFO: Pod "downward-api-458662a5-e089-404e-b612-b0c738953c15": Phase="Pending", Reason="", readiness=false. Elapsed: 165.386517ms
May 29 01:02:58.428: INFO: Pod "downward-api-458662a5-e089-404e-b612-b0c738953c15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328437599s
May 29 01:03:00.591: INFO: Pod "downward-api-458662a5-e089-404e-b612-b0c738953c15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.491418085s
May 29 01:03:02.754: INFO: Pod "downward-api-458662a5-e089-404e-b612-b0c738953c15": Phase="Pending", Reason="", readiness=false. Elapsed: 6.654508889s
May 29 01:03:04.918: INFO: Pod "downward-api-458662a5-e089-404e-b612-b0c738953c15": Phase="Pending", Reason="", readiness=false. Elapsed: 8.818317529s
May 29 01:03:07.083: INFO: Pod "downward-api-458662a5-e089-404e-b612-b0c738953c15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.983161206s
STEP: Saw pod success
May 29 01:03:07.083: INFO: Pod "downward-api-458662a5-e089-404e-b612-b0c738953c15" satisfied condition "Succeeded or Failed"
May 29 01:03:07.246: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod downward-api-458662a5-e089-404e-b612-b0c738953c15 container dapi-container: <nil>
STEP: delete the pod
May 29 01:03:07.585: INFO: Waiting for pod downward-api-458662a5-e089-404e-b612-b0c738953c15 to disappear
May 29 01:03:07.748: INFO: Pod downward-api-458662a5-e089-404e-b612-b0c738953c15 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:12.979 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:08.093: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 59 lines ...
• [SLOW TEST:95.944 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently","total":-1,"completed":2,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:09.821: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 88 lines ...
May 29 01:02:56.992: INFO: Waiting for pod aws-client to disappear
May 29 01:02:57.154: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
May 29 01:02:57.154: INFO: Deleting PersistentVolumeClaim "pvc-vh9b6"
May 29 01:02:57.317: INFO: Deleting PersistentVolume "aws-sf7q5"
May 29 01:02:58.265: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-056bdd5c0b4bb73a7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-056bdd5c0b4bb73a7 is currently attached to i-03b9d4204f582e06f
	status code: 400, request id: 0766f367-ba0f-4c57-9bb3-30824fc88d54
May 29 01:03:04.047: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-056bdd5c0b4bb73a7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-056bdd5c0b4bb73a7 is currently attached to i-03b9d4204f582e06f
	status code: 400, request id: 0efc1e9a-e920-4c8b-83a7-af3aded1ab47
May 29 01:03:09.817: INFO: Successfully deleted PD "aws://ap-northeast-2a/vol-056bdd5c0b4bb73a7".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:03:09.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7323" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":7,"skipped":39,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:13.535: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 34 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":9,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:02:47.431: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
May 29 01:03:05.133: INFO: PersistentVolumeClaim pvc-8dhg2 found but phase is Pending instead of Bound.
May 29 01:03:07.294: INFO: PersistentVolumeClaim pvc-8dhg2 found and phase=Bound (10.970104145s)
May 29 01:03:07.294: INFO: Waiting up to 3m0s for PersistentVolume local-nh7t7 to have phase Bound
May 29 01:03:07.457: INFO: PersistentVolume local-nh7t7 found and phase=Bound (162.317902ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ztpv
STEP: Creating a pod to test subpath
May 29 01:03:07.941: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ztpv" in namespace "provisioning-5523" to be "Succeeded or Failed"
May 29 01:03:08.102: INFO: Pod "pod-subpath-test-preprovisionedpv-ztpv": Phase="Pending", Reason="", readiness=false. Elapsed: 161.133231ms
May 29 01:03:10.264: INFO: Pod "pod-subpath-test-preprovisionedpv-ztpv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323018664s
May 29 01:03:12.499: INFO: Pod "pod-subpath-test-preprovisionedpv-ztpv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.558144783s
STEP: Saw pod success
May 29 01:03:12.499: INFO: Pod "pod-subpath-test-preprovisionedpv-ztpv" satisfied condition "Succeeded or Failed"
May 29 01:03:12.670: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-ztpv container test-container-subpath-preprovisionedpv-ztpv: <nil>
STEP: delete the pod
May 29 01:03:13.056: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ztpv to disappear
May 29 01:03:13.218: INFO: Pod pod-subpath-test-preprovisionedpv-ztpv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ztpv
May 29 01:03:13.218: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ztpv" in namespace "provisioning-5523"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":51,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 37 lines ...
May 29 01:03:08.420: INFO: Running '/tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9819 explain e2e-test-crd-publish-openapi-3177-crds.spec'
May 29 01:03:09.131: INFO: stderr: ""
May 29 01:03:09.131: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3177-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
May 29 01:03:09.131: INFO: Running '/tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9819 explain e2e-test-crd-publish-openapi-3177-crds.spec.bars'
May 29 01:03:09.837: INFO: stderr: ""
May 29 01:03:09.837: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3177-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May 29 01:03:09.838: INFO: Running '/tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9819 explain e2e-test-crd-publish-openapi-3177-crds.spec.bars2'
May 29 01:03:10.514: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:03:15.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9819" for this suite.
... skipping 2 lines ...
• [SLOW TEST:23.333 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":11,"skipped":47,"failed":0}
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:03:16.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:03:17.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-4603" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":12,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:18.280: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
... skipping 152 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:217
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":6,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:21.557: INFO: Only supported for providers [openstack] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1094
------------------------------
... skipping 69 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:22.055: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 7 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
May 29 01:03:16.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c5de944-fefd-4467-8d47-c81827008de9" in namespace "downward-api-9831" to be "Succeeded or Failed"
May 29 01:03:16.785: INFO: Pod "downwardapi-volume-5c5de944-fefd-4467-8d47-c81827008de9": Phase="Pending", Reason="", readiness=false. Elapsed: 169.08465ms
May 29 01:03:18.947: INFO: Pod "downwardapi-volume-5c5de944-fefd-4467-8d47-c81827008de9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.33037067s
May 29 01:03:21.108: INFO: Pod "downwardapi-volume-5c5de944-fefd-4467-8d47-c81827008de9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.492165339s
STEP: Saw pod success
May 29 01:03:21.108: INFO: Pod "downwardapi-volume-5c5de944-fefd-4467-8d47-c81827008de9" satisfied condition "Succeeded or Failed"
May 29 01:03:21.270: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod downwardapi-volume-5c5de944-fefd-4467-8d47-c81827008de9 container client-container: <nil>
STEP: delete the pod
May 29 01:03:21.602: INFO: Waiting for pod downwardapi-volume-5c5de944-fefd-4467-8d47-c81827008de9 to disappear
May 29 01:03:21.764: INFO: Pod downwardapi-volume-5c5de944-fefd-4467-8d47-c81827008de9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.497 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":53,"failed":0}

SSSSS
------------------------------
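The repeated "Waiting up to 5m0s for pod ... to be \"Succeeded or Failed\"" lines above come from polling the pod roughly every two seconds until it reaches a terminal phase. A minimal client-go sketch of that polling pattern, assuming a kubeconfig at the path the log shows; the helper name and interval are illustrative, not the e2e framework's actual code:

    package main

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodSucceededOrFailed polls a pod until it reaches a terminal
    // phase, the same "Succeeded or Failed" condition reported above.
    func waitForPodSucceededOrFailed(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case v1.PodSucceeded:
                return true, nil
            case v1.PodFailed:
                return false, fmt.Errorf("pod %s/%s failed", ns, name)
            default:
                return false, nil // still Pending/Running; keep polling
            }
        })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)
        fmt.Println(waitForPodSucceededOrFailed(cs, "downward-api-9831", "example-pod", 5*time.Minute))
    }
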
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:22.117: INFO: Only supported for providers [gce gke] (not aws)
... skipping 51 lines ...
• [SLOW TEST:13.308 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":3,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:23.156: INFO: Only supported for providers [azure] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277
May 29 01:03:23.102: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-05e0faca-43c8-40ac-86d9-74a7f3b20ee1" in namespace "security-context-test-9740" to be "Succeeded or Failed"
May 29 01:03:23.263: INFO: Pod "busybox-privileged-true-05e0faca-43c8-40ac-86d9-74a7f3b20ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 160.88356ms
May 29 01:03:25.425: INFO: Pod "busybox-privileged-true-05e0faca-43c8-40ac-86d9-74a7f3b20ee1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.32314927s
May 29 01:03:25.425: INFO: Pod "busybox-privileged-true-05e0faca-43c8-40ac-86d9-74a7f3b20ee1" satisfied condition "Succeeded or Failed"
May 29 01:03:25.590: INFO: Got logs for pod "busybox-privileged-true-05e0faca-43c8-40ac-86d9-74a7f3b20ee1": ""
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:03:25.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9740" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":12,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
... skipping 163 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":9,"skipped":95,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:26.438: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 356 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:28.400: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 89 lines ...
STEP: Creating a kubernetes client
May 29 01:03:22.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:145
[It] should report an error and create no PV
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:778
STEP: creating a StorageClass
STEP: creating a claim object with a suffix for gluster dynamic provisioner
May 29 01:03:23.045: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 29 01:03:29.367: INFO: deleting claim "volume-provisioning-9457"/"pvc-m6lzc"
May 29 01:03:29.529: INFO: deleting storage class volume-provisioning-9457-invalid-aws
... skipping 5 lines ...

• [SLOW TEST:7.947 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:777
    should report an error and create no PV
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:778
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","total":-1,"completed":6,"skipped":34,"failed":0}

S
------------------------------
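For context on the "Invalid AWS KMS key" suite above: it provisions through a StorageClass whose kmsKeyId parameter points at a key that does not exist, then asserts an error event and no PV. A rough sketch of such a StorageClass built with client-go (the class name and key ARN are made up for illustration):

    package main

    import (
        "context"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)

        // StorageClass pointing the in-tree aws-ebs provisioner at a KMS key
        // that does not exist; any PVC using it should fail to provision and
        // surface an error event instead of a bound PV.
        sc := &storagev1.StorageClass{
            ObjectMeta:  metav1.ObjectMeta{Name: "invalid-kms-demo"},
            Provisioner: "kubernetes.io/aws-ebs",
            Parameters: map[string]string{
                "kmsKeyId": "arn:aws:kms:us-east-1:000000000000:key/does-not-exist",
            },
        }
        if _, err := cs.StorageV1().StorageClasses().Create(context.TODO(), sc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
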
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
May 29 01:03:28.075: INFO: Waiting up to 5m0s for pod "downwardapi-volume-162695de-7685-437d-9608-d24b82d70a94" in namespace "downward-api-3608" to be "Succeeded or Failed"
May 29 01:03:28.237: INFO: Pod "downwardapi-volume-162695de-7685-437d-9608-d24b82d70a94": Phase="Pending", Reason="", readiness=false. Elapsed: 161.741629ms
May 29 01:03:30.403: INFO: Pod "downwardapi-volume-162695de-7685-437d-9608-d24b82d70a94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.327810251s
STEP: Saw pod success
May 29 01:03:30.403: INFO: Pod "downwardapi-volume-162695de-7685-437d-9608-d24b82d70a94" satisfied condition "Succeeded or Failed"
May 29 01:03:30.570: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod downwardapi-volume-162695de-7685-437d-9608-d24b82d70a94 container client-container: <nil>
STEP: delete the pod
May 29 01:03:31.035: INFO: Waiting for pod downwardapi-volume-162695de-7685-437d-9608-d24b82d70a94 to disappear
May 29 01:03:31.199: INFO: Pod downwardapi-volume-162695de-7685-437d-9608-d24b82d70a94 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:03:31.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3608" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":64,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:31.566: INFO: Only supported for providers [openstack] (not aws)
... skipping 182 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:241
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":6,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:32.898: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 62 lines ...
• [SLOW TEST:19.519 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":9,"skipped":50,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:33.102: INFO: Only supported for providers [openstack] (not aws)
... skipping 30 lines ...
May 29 01:03:03.838: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
May 29 01:03:04.868: INFO: Successfully created a new PD: "aws://ap-northeast-2a/vol-0eeac36c9030e0119".
May 29 01:03:04.868: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-ztll
STEP: Creating a pod to test exec-volume-test
May 29 01:03:05.030: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-ztll" in namespace "volume-554" to be "Succeeded or Failed"
May 29 01:03:05.190: INFO: Pod "exec-volume-test-inlinevolume-ztll": Phase="Pending", Reason="", readiness=false. Elapsed: 159.762139ms
May 29 01:03:07.350: INFO: Pod "exec-volume-test-inlinevolume-ztll": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319432545s
May 29 01:03:09.509: INFO: Pod "exec-volume-test-inlinevolume-ztll": Phase="Pending", Reason="", readiness=false. Elapsed: 4.478968937s
May 29 01:03:11.683: INFO: Pod "exec-volume-test-inlinevolume-ztll": Phase="Pending", Reason="", readiness=false. Elapsed: 6.652617462s
May 29 01:03:13.843: INFO: Pod "exec-volume-test-inlinevolume-ztll": Phase="Pending", Reason="", readiness=false. Elapsed: 8.812987487s
May 29 01:03:16.003: INFO: Pod "exec-volume-test-inlinevolume-ztll": Phase="Pending", Reason="", readiness=false. Elapsed: 10.972628334s
May 29 01:03:18.164: INFO: Pod "exec-volume-test-inlinevolume-ztll": Phase="Pending", Reason="", readiness=false. Elapsed: 13.134166702s
May 29 01:03:20.324: INFO: Pod "exec-volume-test-inlinevolume-ztll": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.2938742s
STEP: Saw pod success
May 29 01:03:20.324: INFO: Pod "exec-volume-test-inlinevolume-ztll" satisfied condition "Succeeded or Failed"
May 29 01:03:20.484: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod exec-volume-test-inlinevolume-ztll container exec-container-inlinevolume-ztll: <nil>
STEP: delete the pod
May 29 01:03:20.816: INFO: Waiting for pod exec-volume-test-inlinevolume-ztll to disappear
May 29 01:03:20.976: INFO: Pod exec-volume-test-inlinevolume-ztll no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-ztll
May 29 01:03:20.976: INFO: Deleting pod "exec-volume-test-inlinevolume-ztll" in namespace "volume-554"
May 29 01:03:21.377: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-0eeac36c9030e0119", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0eeac36c9030e0119 is currently attached to i-0a4e2805a6c116cdf
	status code: 400, request id: d8f8d145-4a7b-4726-bd04-313e76eed9e4
May 29 01:03:27.209: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-0eeac36c9030e0119", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0eeac36c9030e0119 is currently attached to i-0a4e2805a6c116cdf
	status code: 400, request id: a6fde9bb-4fec-4e8c-94c4-8f46ecdee5d7
May 29 01:03:32.978: INFO: Successfully deleted PD "aws://ap-northeast-2a/vol-0eeac36c9030e0119".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:03:32.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-554" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":23,"failed":0}

SS
------------------------------
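The "Couldn't delete PD ... sleeping 5s: error deleting EBS volumes: VolumeInUse" lines above show deletion being retried until the instance releases the volume. A minimal sketch of that retry pattern with aws-sdk-go; the helper name and retry budget are illustrative, not the framework's actual cleanup code:

    package main

    import (
        "fmt"
        "time"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/awserr"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/ec2"
    )

    // deleteVolumeWithRetry deletes an EBS volume, sleeping between attempts
    // while AWS still reports VolumeInUse (the volume is attached), matching
    // the "sleeping 5s" retries in the log.
    func deleteVolumeWithRetry(svc *ec2.EC2, volumeID string, attempts int, backoff time.Duration) error {
        for i := 0; i < attempts; i++ {
            _, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
            if err == nil {
                return nil
            }
            if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
                time.Sleep(backoff) // still attached to an instance; wait and retry
                continue
            }
            return err // anything else is fatal
        }
        return fmt.Errorf("volume %s still in use after %d attempts", volumeID, attempts)
    }

    func main() {
        svc := ec2.New(session.Must(session.NewSession()))
        fmt.Println(deleteVolumeWithRetry(svc, "vol-0eeac36c9030e0119", 10, 5*time.Second))
    }
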
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:33.318: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 45 lines ...
• [SLOW TEST:10.171 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":4,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:33.348: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 155 lines ...
• [SLOW TEST:65.400 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:129
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":9,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:36.594: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 22 lines ...
STEP: Creating a kubernetes client
May 29 01:03:30.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
May 29 01:03:30.914: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:03:36.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4049" for this suite.


• [SLOW TEST:6.815 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":7,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:36.869: INFO: Only supported for providers [azure] (not aws)
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 95 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
May 29 01:03:34.339: INFO: Waiting up to 5m0s for pod "metadata-volume-c9fc799c-6813-4d49-beda-828d3f7cc357" in namespace "projected-7819" to be "Succeeded or Failed"
May 29 01:03:34.499: INFO: Pod "metadata-volume-c9fc799c-6813-4d49-beda-828d3f7cc357": Phase="Pending", Reason="", readiness=false. Elapsed: 160.664287ms
May 29 01:03:36.669: INFO: Pod "metadata-volume-c9fc799c-6813-4d49-beda-828d3f7cc357": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.32988839s
STEP: Saw pod success
May 29 01:03:36.669: INFO: Pod "metadata-volume-c9fc799c-6813-4d49-beda-828d3f7cc357" satisfied condition "Succeeded or Failed"
May 29 01:03:36.829: INFO: Trying to get logs from node ip-172-20-47-14.ap-northeast-2.compute.internal pod metadata-volume-c9fc799c-6813-4d49-beda-828d3f7cc357 container client-container: <nil>
STEP: delete the pod
May 29 01:03:37.163: INFO: Waiting for pod metadata-volume-c9fc799c-6813-4d49-beda-828d3f7cc357 to disappear
May 29 01:03:37.324: INFO: Pod metadata-volume-c9fc799c-6813-4d49-beda-828d3f7cc357 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 30 lines ...
• [SLOW TEST:9.204 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":48,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":8,"skipped":26,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:37.667: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 10 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1440
------------------------------
... skipping 262 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":10,"skipped":127,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:39.929: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-projected-all-test-volume-1c8aed67-08c1-453f-9378-eb901d1ff16f
STEP: Creating secret with name secret-projected-all-test-volume-96bacea1-a089-4e62-8b73-1325cce07b52
STEP: Creating a pod to test Check all projections for projected volume plugin
May 29 01:03:37.912: INFO: Waiting up to 5m0s for pod "projected-volume-38912b57-236e-4bc0-bfb2-476ca21cac2a" in namespace "projected-6032" to be "Succeeded or Failed"
May 29 01:03:38.079: INFO: Pod "projected-volume-38912b57-236e-4bc0-bfb2-476ca21cac2a": Phase="Pending", Reason="", readiness=false. Elapsed: 167.607132ms
May 29 01:03:40.242: INFO: Pod "projected-volume-38912b57-236e-4bc0-bfb2-476ca21cac2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.330828114s
STEP: Saw pod success
May 29 01:03:40.242: INFO: Pod "projected-volume-38912b57-236e-4bc0-bfb2-476ca21cac2a" satisfied condition "Succeeded or Failed"
May 29 01:03:40.405: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod projected-volume-38912b57-236e-4bc0-bfb2-476ca21cac2a container projected-all-volume-test: <nil>
STEP: delete the pod
May 29 01:03:40.760: INFO: Waiting for pod projected-volume-38912b57-236e-4bc0-bfb2-476ca21cac2a to disappear
May 29 01:03:40.923: INFO: Pod projected-volume-38912b57-236e-4bc0-bfb2-476ca21cac2a no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:03:40.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6032" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 30 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":47,"failed":0}
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:03:38.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-17d51761-012b-4f34-b84a-5ee04db035c7
STEP: Creating a pod to test consume secrets
May 29 01:03:40.085: INFO: Waiting up to 5m0s for pod "pod-secrets-2ae4f39d-ab68-48ec-bd6e-c2082ac7b40a" in namespace "secrets-8808" to be "Succeeded or Failed"
May 29 01:03:40.247: INFO: Pod "pod-secrets-2ae4f39d-ab68-48ec-bd6e-c2082ac7b40a": Phase="Pending", Reason="", readiness=false. Elapsed: 162.317196ms
May 29 01:03:42.413: INFO: Pod "pod-secrets-2ae4f39d-ab68-48ec-bd6e-c2082ac7b40a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.328518887s
STEP: Saw pod success
May 29 01:03:42.413: INFO: Pod "pod-secrets-2ae4f39d-ab68-48ec-bd6e-c2082ac7b40a" satisfied condition "Succeeded or Failed"
May 29 01:03:42.580: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod pod-secrets-2ae4f39d-ab68-48ec-bd6e-c2082ac7b40a container secret-volume-test: <nil>
STEP: delete the pod
May 29 01:03:42.924: INFO: Waiting for pod pod-secrets-2ae4f39d-ab68-48ec-bd6e-c2082ac7b40a to disappear
May 29 01:03:43.087: INFO: Pod pod-secrets-2ae4f39d-ab68-48ec-bd6e-c2082ac7b40a no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:03:43.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8808" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-windows] Windows volume mounts 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
May 29 01:03:43.435: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 42 lines ...
May 29 01:03:36.267: INFO: PersistentVolumeClaim pvc-8h675 found but phase is Pending instead of Bound.
May 29 01:03:38.426: INFO: PersistentVolumeClaim pvc-8h675 found and phase=Bound (15.255794838s)
May 29 01:03:38.426: INFO: Waiting up to 3m0s for PersistentVolume local-r299q to have phase Bound
May 29 01:03:38.582: INFO: PersistentVolume local-r299q found and phase=Bound (156.578944ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pxg8
STEP: Creating a pod to test subpath
May 29 01:03:39.054: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pxg8" in namespace "provisioning-5983" to be "Succeeded or Failed"
May 29 01:03:39.212: INFO: Pod "pod-subpath-test-preprovisionedpv-pxg8": Phase="Pending", Reason="", readiness=false. Elapsed: 158.4447ms
May 29 01:03:41.369: INFO: Pod "pod-subpath-test-preprovisionedpv-pxg8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315098764s
May 29 01:03:43.526: INFO: Pod "pod-subpath-test-preprovisionedpv-pxg8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.472056952s
STEP: Saw pod success
May 29 01:03:43.526: INFO: Pod "pod-subpath-test-preprovisionedpv-pxg8" satisfied condition "Succeeded or Failed"
May 29 01:03:43.683: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-pxg8 container test-container-volume-preprovisionedpv-pxg8: <nil>
STEP: delete the pod
May 29 01:03:44.013: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pxg8 to disappear
May 29 01:03:44.174: INFO: Pod pod-subpath-test-preprovisionedpv-pxg8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pxg8
May 29 01:03:44.174: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pxg8" in namespace "provisioning-5983"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":13,"skipped":58,"failed":0}

SSSSS
------------------------------
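On the subPath suites above: a subPath mount exposes a single entry of the backing volume at the container's mount point, which is what tests like "should support non-existent path" exercise. A minimal client-go sketch of a pod using one (the claim, pod, and path names are illustrative):

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // subPathPod mounts only the "subdir" entry of the backing volume into
    // the container, the mechanism the subPath suites above exercise.
    func subPathPod() *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Volumes: []v1.Volume{{
                    Name: "vol",
                    VolumeSource: v1.VolumeSource{
                        PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{
                            ClaimName: "my-claim",
                        },
                    },
                }},
                Containers: []v1.Container{{
                    Name:    "c",
                    Image:   "busybox",
                    Command: []string{"ls", "/mnt/test"},
                    VolumeMounts: []v1.VolumeMount{{
                        Name:      "vol",
                        MountPath: "/mnt/test",
                        SubPath:   "subdir", // only this directory of the volume is visible
                    }},
                }},
            },
        }
    }

    func main() { _ = subPathPod() }
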
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:46.643: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 164 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 54 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
May 29 01:03:42.093: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 29 01:03:42.259: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-hc2x
STEP: Creating a pod to test subpath
May 29 01:03:42.429: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-hc2x" in namespace "provisioning-4675" to be "Succeeded or Failed"
May 29 01:03:42.599: INFO: Pod "pod-subpath-test-inlinevolume-hc2x": Phase="Pending", Reason="", readiness=false. Elapsed: 169.312228ms
May 29 01:03:44.765: INFO: Pod "pod-subpath-test-inlinevolume-hc2x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335981881s
May 29 01:03:46.936: INFO: Pod "pod-subpath-test-inlinevolume-hc2x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.506348912s
STEP: Saw pod success
May 29 01:03:46.936: INFO: Pod "pod-subpath-test-inlinevolume-hc2x" satisfied condition "Succeeded or Failed"
May 29 01:03:47.098: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-hc2x container test-container-subpath-inlinevolume-hc2x: <nil>
STEP: delete the pod
May 29 01:03:47.432: INFO: Waiting for pod pod-subpath-test-inlinevolume-hc2x to disappear
May 29 01:03:47.595: INFO: Pod pod-subpath-test-inlinevolume-hc2x no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-hc2x
May 29 01:03:47.595: INFO: Deleting pod "pod-subpath-test-inlinevolume-hc2x" in namespace "provisioning-4675"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":11,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:03:43.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124
May 29 01:03:44.448: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-4386" to be "Succeeded or Failed"
May 29 01:03:44.645: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 196.856246ms
May 29 01:03:46.814: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.366307129s
May 29 01:03:48.977: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.529213747s
May 29 01:03:48.977: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:03:49.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4386" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":8,"skipped":52,"failed":0}

SSS
------------------------------
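The "explicit-nonroot-uid" pod above passes because runAsNonRoot can be verified against an explicit non-zero runAsUser without inspecting the image. A minimal client-go sketch of that security context (the UID, image, and command are illustrative, not the suite's exact fixture):

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }
    func boolPtr(b bool) *bool    { return &b }

    // explicitNonRootPod builds a pod like the one in the log: runAsNonRoot
    // plus an explicit non-zero UID, so the kubelet can enforce the
    // constraint without knowing the image's default user.
    func explicitNonRootPod() *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "explicit-nonroot-uid"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:    "c",
                    Image:   "busybox",
                    Command: []string{"id", "-u"},
                    SecurityContext: &v1.SecurityContext{
                        RunAsNonRoot: boolPtr(true),
                        RunAsUser:    int64Ptr(1234), // illustrative non-root UID
                    },
                }},
            },
        }
    }

    func main() { _ = explicitNonRootPod() }
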
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:49.520: INFO: Only supported for providers [vsphere] (not aws)
... skipping 125 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:666
    should expand volume without restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:681
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":5,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:52.015: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 145 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:169
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":5,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:55.531: INFO: Driver local doesn't support ext3 -- skipping
... skipping 51 lines ...
May 29 01:02:27.903: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1999-aws-scsk2rj
STEP: creating a claim
May 29 01:02:28.067: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-zcwk
STEP: Creating a pod to test subpath
May 29 01:02:28.552: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-zcwk" in namespace "provisioning-1999" to be "Succeeded or Failed"
May 29 01:02:28.720: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 167.446587ms
May 29 01:02:30.880: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32821963s
May 29 01:02:33.041: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488749397s
May 29 01:02:35.201: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.648656665s
May 29 01:02:37.361: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.808689293s
May 29 01:02:39.521: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.968604595s
... skipping 6 lines ...
May 29 01:02:54.693: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 26.140460455s
May 29 01:02:56.853: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 28.300711337s
May 29 01:02:59.013: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 30.460686918s
May 29 01:03:01.173: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 32.620634805s
May 29 01:03:03.333: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.780869038s
STEP: Saw pod success
May 29 01:03:03.333: INFO: Pod "pod-subpath-test-dynamicpv-zcwk" satisfied condition "Succeeded or Failed"
May 29 01:03:03.493: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-zcwk container test-container-subpath-dynamicpv-zcwk: <nil>
STEP: delete the pod
May 29 01:03:03.822: INFO: Waiting for pod pod-subpath-test-dynamicpv-zcwk to disappear
May 29 01:03:03.982: INFO: Pod pod-subpath-test-dynamicpv-zcwk no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-zcwk
May 29 01:03:03.982: INFO: Deleting pod "pod-subpath-test-dynamicpv-zcwk" in namespace "provisioning-1999"
STEP: Creating pod pod-subpath-test-dynamicpv-zcwk
STEP: Creating a pod to test subpath
May 29 01:03:04.305: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-zcwk" in namespace "provisioning-1999" to be "Succeeded or Failed"
May 29 01:03:04.464: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 159.437522ms
May 29 01:03:06.624: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3191746s
May 29 01:03:08.788: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482934344s
May 29 01:03:10.994: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.688829823s
May 29 01:03:13.156: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.851642425s
May 29 01:03:15.320: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 11.015281774s
May 29 01:03:17.481: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 13.176553377s
May 29 01:03:19.643: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 15.338051487s
May 29 01:03:21.804: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 17.499350689s
May 29 01:03:23.966: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 19.660814745s
May 29 01:03:26.129: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Pending", Reason="", readiness=false. Elapsed: 21.824164113s
May 29 01:03:28.290: INFO: Pod "pod-subpath-test-dynamicpv-zcwk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.985463822s
STEP: Saw pod success
May 29 01:03:28.290: INFO: Pod "pod-subpath-test-dynamicpv-zcwk" satisfied condition "Succeeded or Failed"
May 29 01:03:28.452: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-zcwk container test-container-subpath-dynamicpv-zcwk: <nil>
STEP: delete the pod
May 29 01:03:28.789: INFO: Waiting for pod pod-subpath-test-dynamicpv-zcwk to disappear
May 29 01:03:28.950: INFO: Pod pod-subpath-test-dynamicpv-zcwk no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-zcwk
May 29 01:03:28.950: INFO: Deleting pod "pod-subpath-test-dynamicpv-zcwk" in namespace "provisioning-1999"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:56.522: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 110 lines ...
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-7587, will wait for the garbage collector to delete the pods
May 29 01:02:40.841: INFO: Deleting ReplicationController up-down-1 took: 160.279722ms
May 29 01:02:40.941: INFO: Terminating ReplicationController up-down-1 pods took: 100.168019ms
STEP: verifying service up-down-1 is not up
May 29 01:02:56.325: INFO: Creating new host exec pod
May 29 01:03:04.809: INFO: Running '/tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7587 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.253.172:80 && echo service-down-failed'
May 29 01:03:08.455: INFO: rc: 28
May 29 01:03:08.455: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.66.253.172:80 && echo service-down-failed" in pod services-7587/verify-service-down-host-exec-pod: error running /tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7587 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.253.172:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.66.253.172:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7587
STEP: verifying service up-down-2 is still up
May 29 01:03:08.619: INFO: Creating new host exec pod
May 29 01:03:13.107: INFO: Creating new exec pod
... skipping 53 lines ...
• [SLOW TEST:110.091 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1025
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":5,"skipped":76,"failed":0}

SS
------------------------------
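In the service up/down test above, "rc: 28" is curl's connect-timeout exit code: the old ClusterIP no longer answers, which is exactly what "verifying service up-down-1 is not up" wants to see. A loose Go sketch of that check, shelling out to kubectl exec as the log does (the pod and IP are taken from the log; the helper itself is illustrative, not the framework's code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // verifyServiceDown curls the old ClusterIP from a host-exec pod with a
    // short connect timeout. curl exit code 28 (timeout) means nothing
    // answered, i.e. the service is really gone.
    func verifyServiceDown(namespace, pod, clusterIP string) (bool, error) {
        cmd := exec.Command("kubectl", "--namespace", namespace, "exec", pod, "--",
            "/bin/sh", "-c", fmt.Sprintf("curl -g -s --connect-timeout 2 http://%s:80", clusterIP))
        err := cmd.Run()
        if err == nil {
            return false, nil // curl succeeded: something is still serving
        }
        if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 28 {
            return true, nil // connect timeout: service is down, as expected
        }
        return false, err // kubectl failure or an unexpected curl error
    }

    func main() {
        down, err := verifyServiceDown("services-7587", "verify-service-down-host-exec-pod", "100.66.253.172")
        fmt.Println(down, err)
    }
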
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:58.507: INFO: Only supported for providers [gce gke] (not aws)
... skipping 63 lines ...
• [SLOW TEST:49.576 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":8,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:59.752: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 40 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:03:48.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:03:59.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5558" for this suite.


• [SLOW TEST:11.627 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":12,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:03:59.904: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 165 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":14,"skipped":93,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 73 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":9,"skipped":57,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:01.545: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 37 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":8,"skipped":63,"failed":0}
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:04:01.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 45 lines ...
May 29 01:03:51.196: INFO: PersistentVolumeClaim pvc-8hkc5 found but phase is Pending instead of Bound.
May 29 01:03:53.364: INFO: PersistentVolumeClaim pvc-8hkc5 found and phase=Bound (15.366995604s)
May 29 01:03:53.364: INFO: Waiting up to 3m0s for PersistentVolume local-mkmcj to have phase Bound
May 29 01:03:53.530: INFO: PersistentVolume local-mkmcj found and phase=Bound (165.866813ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-khr4
STEP: Creating a pod to test subpath
May 29 01:03:54.115: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-khr4" in namespace "provisioning-8624" to be "Succeeded or Failed"
May 29 01:03:54.378: INFO: Pod "pod-subpath-test-preprovisionedpv-khr4": Phase="Pending", Reason="", readiness=false. Elapsed: 263.64943ms
May 29 01:03:56.544: INFO: Pod "pod-subpath-test-preprovisionedpv-khr4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.429620955s
May 29 01:03:58.766: INFO: Pod "pod-subpath-test-preprovisionedpv-khr4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.651322898s
STEP: Saw pod success
May 29 01:03:58.766: INFO: Pod "pod-subpath-test-preprovisionedpv-khr4" satisfied condition "Succeeded or Failed"
May 29 01:03:59.025: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-khr4 container test-container-subpath-preprovisionedpv-khr4: <nil>
STEP: delete the pod
May 29 01:03:59.582: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-khr4 to disappear
May 29 01:03:59.798: INFO: Pod pod-subpath-test-preprovisionedpv-khr4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-khr4
May 29 01:03:59.798: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-khr4" in namespace "provisioning-8624"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:02.751: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 96 lines ...
• [SLOW TEST:30.680 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":7,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:04.031: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 44 lines ...
May 29 01:04:01.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 29 01:04:02.111: INFO: Waiting up to 5m0s for pod "pod-9f805509-bd8d-47be-a42a-e7fed15b6e96" in namespace "emptydir-5950" to be "Succeeded or Failed"
May 29 01:04:02.278: INFO: Pod "pod-9f805509-bd8d-47be-a42a-e7fed15b6e96": Phase="Pending", Reason="", readiness=false. Elapsed: 166.369307ms
May 29 01:04:04.435: INFO: Pod "pod-9f805509-bd8d-47be-a42a-e7fed15b6e96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.323423726s
STEP: Saw pod success
May 29 01:04:04.435: INFO: Pod "pod-9f805509-bd8d-47be-a42a-e7fed15b6e96" satisfied condition "Succeeded or Failed"
May 29 01:04:04.591: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod pod-9f805509-bd8d-47be-a42a-e7fed15b6e96 container test-container: <nil>
STEP: delete the pod
May 29 01:04:04.952: INFO: Waiting for pod pod-9f805509-bd8d-47be-a42a-e7fed15b6e96 to disappear
May 29 01:04:05.123: INFO: Pod pod-9f805509-bd8d-47be-a42a-e7fed15b6e96 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:04:05.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5950" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":95,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:04:02.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:118
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
May 29 01:04:03.581: INFO: Waiting up to 5m0s for pod "security-context-c972ba99-bd5c-40f2-bfd4-1945b65c6e4a" in namespace "security-context-7320" to be "Succeeded or Failed"
May 29 01:04:03.742: INFO: Pod "security-context-c972ba99-bd5c-40f2-bfd4-1945b65c6e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 160.828996ms
May 29 01:04:05.903: INFO: Pod "security-context-c972ba99-bd5c-40f2-bfd4-1945b65c6e4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321705946s
STEP: Saw pod success
May 29 01:04:05.903: INFO: Pod "security-context-c972ba99-bd5c-40f2-bfd4-1945b65c6e4a" satisfied condition "Succeeded or Failed"
May 29 01:04:06.061: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod security-context-c972ba99-bd5c-40f2-bfd4-1945b65c6e4a container test-container: <nil>
STEP: delete the pod
May 29 01:04:06.390: INFO: Waiting for pod security-context-c972ba99-bd5c-40f2-bfd4-1945b65c6e4a to disappear
May 29 01:04:06.575: INFO: Pod security-context-c972ba99-bd5c-40f2-bfd4-1945b65c6e4a no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:04:06.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-7320" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":9,"skipped":68,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:06.940: INFO: Only supported for providers [azure] (not aws)
... skipping 245 lines ...
• [SLOW TEST:67.842 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should not be ready with a docker exec readiness probe timeout 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:233
------------------------------
{"msg":"PASSED [k8s.io] Probing container should not be ready with a docker exec readiness probe timeout ","total":-1,"completed":6,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:07.675: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 35 lines ...
• [SLOW TEST:7.185 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":63,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:08.774: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 187 lines ...
May 29 01:03:55.516: INFO: Waiting for pod aws-client to disappear
May 29 01:03:55.706: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
May 29 01:03:55.706: INFO: Deleting PersistentVolumeClaim "pvc-cwhmw"
May 29 01:03:56.039: INFO: Deleting PersistentVolume "aws-jjhm9"
May 29 01:03:57.036: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-026f7a2a58dfead0a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-026f7a2a58dfead0a is currently attached to i-0a4e2805a6c116cdf
	status code: 400, request id: a9d3bbca-57e1-4f38-a9c2-3b2ec7eafdfa
May 29 01:04:02.792: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-026f7a2a58dfead0a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-026f7a2a58dfead0a is currently attached to i-0a4e2805a6c116cdf
	status code: 400, request id: 973692dd-a69b-4f0c-8e35-7a46a557e27d
May 29 01:04:08.622: INFO: Successfully deleted PD "aws://ap-northeast-2a/vol-026f7a2a58dfead0a".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:04:08.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-3276" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":25,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
... skipping 9 lines ...
May 29 01:03:38.480: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-663-aws-scmgjqq
STEP: creating a claim
May 29 01:03:38.653: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-mn7b
STEP: Creating a pod to test exec-volume-test
May 29 01:03:39.130: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-mn7b" in namespace "volume-663" to be "Succeeded or Failed"
May 29 01:03:39.289: INFO: Pod "exec-volume-test-dynamicpv-mn7b": Phase="Pending", Reason="", readiness=false. Elapsed: 158.091054ms
May 29 01:03:41.449: INFO: Pod "exec-volume-test-dynamicpv-mn7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318482001s
May 29 01:03:43.608: INFO: Pod "exec-volume-test-dynamicpv-mn7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.477791179s
May 29 01:03:45.770: INFO: Pod "exec-volume-test-dynamicpv-mn7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.639346312s
May 29 01:03:47.928: INFO: Pod "exec-volume-test-dynamicpv-mn7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.797690283s
May 29 01:03:50.087: INFO: Pod "exec-volume-test-dynamicpv-mn7b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.956061566s
May 29 01:03:52.258: INFO: Pod "exec-volume-test-dynamicpv-mn7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.127121876s
STEP: Saw pod success
May 29 01:03:52.258: INFO: Pod "exec-volume-test-dynamicpv-mn7b" satisfied condition "Succeeded or Failed"
May 29 01:03:52.417: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod exec-volume-test-dynamicpv-mn7b container exec-container-dynamicpv-mn7b: <nil>
STEP: delete the pod
May 29 01:03:52.853: INFO: Waiting for pod exec-volume-test-dynamicpv-mn7b to disappear
May 29 01:03:53.056: INFO: Pod exec-volume-test-dynamicpv-mn7b no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-mn7b
May 29 01:03:53.057: INFO: Deleting pod "exec-volume-test-dynamicpv-mn7b" in namespace "volume-663"
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:04:05.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-a4074c15-4eba-4960-98be-55b15ec4f5bc
STEP: Creating a pod to test consume configMaps
May 29 01:04:06.588: INFO: Waiting up to 5m0s for pod "pod-configmaps-e227bc12-563b-41ce-ab63-f09172e0bb85" in namespace "configmap-6428" to be "Succeeded or Failed"
May 29 01:04:06.745: INFO: Pod "pod-configmaps-e227bc12-563b-41ce-ab63-f09172e0bb85": Phase="Pending", Reason="", readiness=false. Elapsed: 156.493944ms
May 29 01:04:08.903: INFO: Pod "pod-configmaps-e227bc12-563b-41ce-ab63-f09172e0bb85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315228611s
May 29 01:04:11.060: INFO: Pod "pod-configmaps-e227bc12-563b-41ce-ab63-f09172e0bb85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471972492s
May 29 01:04:13.217: INFO: Pod "pod-configmaps-e227bc12-563b-41ce-ab63-f09172e0bb85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.628693735s
STEP: Saw pod success
May 29 01:04:13.217: INFO: Pod "pod-configmaps-e227bc12-563b-41ce-ab63-f09172e0bb85" satisfied condition "Succeeded or Failed"
May 29 01:04:13.375: INFO: Trying to get logs from node ip-172-20-58-248.ap-northeast-2.compute.internal pod pod-configmaps-e227bc12-563b-41ce-ab63-f09172e0bb85 container agnhost-container: <nil>
STEP: delete the pod
May 29 01:04:13.708: INFO: Waiting for pod pod-configmaps-e227bc12-563b-41ce-ab63-f09172e0bb85 to disappear
May 29 01:04:13.864: INFO: Pod pod-configmaps-e227bc12-563b-41ce-ab63-f09172e0bb85 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.718 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":99,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
May 29 01:04:08.691: INFO: Waiting up to 5m0s for pod "pod-a9d27e73-b19f-4c7f-963a-894ae060b5ae" in namespace "emptydir-8524" to be "Succeeded or Failed"
May 29 01:04:08.856: INFO: Pod "pod-a9d27e73-b19f-4c7f-963a-894ae060b5ae": Phase="Pending", Reason="", readiness=false. Elapsed: 164.428915ms
May 29 01:04:11.020: INFO: Pod "pod-a9d27e73-b19f-4c7f-963a-894ae060b5ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328855933s
May 29 01:04:13.188: INFO: Pod "pod-a9d27e73-b19f-4c7f-963a-894ae060b5ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.496630735s
STEP: Saw pod success
May 29 01:04:13.188: INFO: Pod "pod-a9d27e73-b19f-4c7f-963a-894ae060b5ae" satisfied condition "Succeeded or Failed"
May 29 01:04:13.352: INFO: Trying to get logs from node ip-172-20-33-144.ap-northeast-2.compute.internal pod pod-a9d27e73-b19f-4c7f-963a-894ae060b5ae container test-container: <nil>
STEP: delete the pod
May 29 01:04:13.699: INFO: Waiting for pod pod-a9d27e73-b19f-4c7f-963a-894ae060b5ae to disappear
May 29 01:04:13.864: INFO: Pod pod-a9d27e73-b19f-4c7f-963a-894ae060b5ae no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:48
    volume on default medium should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:71
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":7,"skipped":51,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:14.203: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 47 lines ...
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:04:13.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
May 29 01:04:14.162: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:ap-northeast-2a]
May 29 01:04:14.162: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
May 29 01:04:14.162: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 201 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":13,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:18.815: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 219 lines ...
• [SLOW TEST:23.594 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:778
------------------------------
{"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":6,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:7.061 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":110,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:21.306: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 134 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":10,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:26.035: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 119 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:310
    should preserve attachment policy when no CSIDriver present
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:332
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":14,"skipped":80,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:32.060: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 71 lines ...
May 29 01:04:21.039: INFO: PersistentVolumeClaim pvc-x8t8g found but phase is Pending instead of Bound.
May 29 01:04:23.201: INFO: PersistentVolumeClaim pvc-x8t8g found and phase=Bound (13.135110707s)
May 29 01:04:23.201: INFO: Waiting up to 3m0s for PersistentVolume local-wtx5c to have phase Bound
May 29 01:04:23.363: INFO: PersistentVolume local-wtx5c found and phase=Bound (161.470003ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-6jfh
STEP: Creating a pod to test exec-volume-test
May 29 01:04:23.851: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-6jfh" in namespace "volume-6974" to be "Succeeded or Failed"
May 29 01:04:24.013: INFO: Pod "exec-volume-test-preprovisionedpv-6jfh": Phase="Pending", Reason="", readiness=false. Elapsed: 162.236184ms
May 29 01:04:26.177: INFO: Pod "exec-volume-test-preprovisionedpv-6jfh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.326383269s
STEP: Saw pod success
May 29 01:04:26.177: INFO: Pod "exec-volume-test-preprovisionedpv-6jfh" satisfied condition "Succeeded or Failed"
May 29 01:04:26.339: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-6jfh container exec-container-preprovisionedpv-6jfh: <nil>
STEP: delete the pod
May 29 01:04:26.680: INFO: Waiting for pod exec-volume-test-preprovisionedpv-6jfh to disappear
May 29 01:04:26.842: INFO: Pod exec-volume-test-preprovisionedpv-6jfh no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-6jfh
May 29 01:04:26.842: INFO: Deleting pod "exec-volume-test-preprovisionedpv-6jfh" in namespace "volume-6974"
... skipping 75 lines ...
• [SLOW TEST:22.538 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1974
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":10,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:04:32.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":11,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:33.038: INFO: Driver local doesn't support ext4 -- skipping
... skipping 72 lines ...
• [SLOW TEST:7.368 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":11,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:33.430: INFO: Only supported for providers [azure] (not aws)
... skipping 196 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:04:33.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9819" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":18,"skipped":117,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:04:23.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
May 29 01:04:27.491: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 29 01:04:27.491: INFO: Running '/tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6296 describe pod agnhost-primary-2dzpw'
May 29 01:04:28.436: INFO: stderr: ""
May 29 01:04:28.436: INFO: stdout: "Name:         agnhost-primary-2dzpw\nNamespace:    kubectl-6296\nPriority:     0\nNode:         ip-172-20-58-248.ap-northeast-2.compute.internal/172.20.58.248\nStart Time:   Sat, 29 May 2021 01:04:25 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.4.79\nIPs:\n  IP:           100.96.4.79\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   docker://63fc8504484efe1f68c061d85a27f7901c353ca62798fcf6c8dec38a5c6c8ab4\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.21\n    Image ID:       docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 29 May 2021 01:04:26 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mgzjd (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-mgzjd:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-mgzjd\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  3s    default-scheduler  Successfully assigned kubectl-6296/agnhost-primary-2dzpw to ip-172-20-58-248.ap-northeast-2.compute.internal\n  Normal  Pulled     2s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n  Normal  Created    2s    kubelet            Created container agnhost-primary\n  Normal  Started    2s    kubelet            Started container agnhost-primary\n"
May 29 01:04:28.436: INFO: Running '/tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6296 describe rc agnhost-primary'
May 29 01:04:29.509: INFO: stderr: ""
May 29 01:04:29.509: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-6296\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.21\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: agnhost-primary-2dzpw\n"
May 29 01:04:29.509: INFO: Running '/tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6296 describe service agnhost-primary'
May 29 01:04:30.574: INFO: stderr: ""
May 29 01:04:30.574: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-6296\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Families:       <none>\nIP:                100.67.185.38\nIPs:               100.67.185.38\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.4.79:6379\nSession Affinity:  None\nEvents:            <none>\n"
May 29 01:04:30.732: INFO: Running '/tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6296 describe node ip-172-20-33-144.ap-northeast-2.compute.internal'
May 29 01:04:32.494: INFO: stderr: ""
May 29 01:04:32.494: INFO: stdout: "Name:               ip-172-20-33-144.ap-northeast-2.compute.internal\nRoles:              node\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=t3.medium\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=ap-northeast-2\n                    failure-domain.beta.kubernetes.io/zone=ap-northeast-2a\n                    kops.k8s.io/instancegroup=nodes-ap-northeast-2a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-33-144.ap-northeast-2.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=node\n                    node-role.kubernetes.io/node=\n                    node.kubernetes.io/instance-type=t3.medium\n                    topology.hostpath.csi/node=ip-172-20-33-144.ap-northeast-2.compute.internal\n                    topology.kubernetes.io/region=ap-northeast-2\n                    topology.kubernetes.io/zone=ap-northeast-2a\nAnnotations:        csi.volume.kubernetes.io/nodeid:\n                      {\"csi-hostpath-volume-1313\":\"ip-172-20-33-144.ap-northeast-2.compute.internal\",\"csi-mock-csi-mock-volumes-6298\":\"csi-mock-csi-mock-volumes...\n                    io.cilium.network.ipv4-cilium-host: 100.96.1.108\n                    io.cilium.network.ipv4-health-ip: 100.96.1.166\n                    io.cilium.network.ipv4-pod-cidr: 100.96.1.0/24\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 29 May 2021 00:56:30 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-33-144.ap-northeast-2.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Sat, 29 May 2021 01:04:30 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 29 May 2021 00:56:59 +0000   Sat, 29 May 2021 00:56:59 +0000   CiliumIsUp                   Cilium is running on this node\n  MemoryPressure       False   Sat, 29 May 2021 01:04:22 +0000   Sat, 29 May 2021 00:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 29 May 2021 01:04:22 +0000   Sat, 29 May 2021 00:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 29 May 2021 01:04:22 +0000   Sat, 29 May 2021 00:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 29 May 2021 01:04:22 +0000   Sat, 29 May 2021 00:56:50 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:   172.20.33.144\n  ExternalIP:   13.124.122.160\n  Hostname:     ip-172-20-33-144.ap-northeast-2.compute.internal\n  InternalDNS:  ip-172-20-33-144.ap-northeast-2.compute.internal\n  ExternalDNS:  ec2-13-124-122-160.ap-northeast-2.compute.amazonaws.com\nCapacity:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           50319340Ki\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      3766452Ki\n  pods:                        110\nAllocatable:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           46374303668\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      3664052Ki\n  pods:                        110\nSystem Info:\n  Machine ID:                         ec2e132c5b47cd4f39eee277bc3fd461\n  System UUID:                        ec2e132c-5b47-cd4f-39ee-e277bc3fd461\n  Boot ID:                            7cec1073-ebc8-458b-8a4e-b664029c953f\n  Kernel Version:                     4.18.0-240.15.1.el8_3.x86_64\n  OS Image:                           Red Hat Enterprise Linux 8.3 (Ootpa)\n  Operating System:                   linux\n  Architecture:                       amd64\n  Container Runtime Version:          docker://19.3.15\n  Kubelet Version:                    v1.20.7\n  Kube-Proxy Version:                 v1.20.7\nPodCIDR:                              100.96.1.0/24\nPodCIDRs:                             100.96.1.0/24\nProviderID:                           aws:///ap-northeast-2a/i-0a4e2805a6c116cdf\nNon-terminated Pods:                  (17 in total)\n  Namespace                           Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                           ----                                                               ------------  ----------  ---------------  -------------  ---\n  container-probe-6004                startup-6e4aba13-659a-4863-af3d-1190b6ea96fd                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s\n  csi-mock-volumes-6298-4374          csi-mockplugin-0                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s\n  csi-mock-volumes-6298-4374          csi-mockplugin-attacher-0                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s\n  deployment-4317                     test-cleanup-deployment-685c4f8568-kf4b4                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s\n  kube-system                         cilium-hjh65                                                       100m (5%)     0 (0%)      128Mi (3%)       100Mi (2%)     8m2s\n  kube-system                         coredns-8f5559c9b-pxvkg                                            100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m11s\n  kube-system                         coredns-autoscaler-6f594f4c58-6t684                                20m (1%)      0 (0%)      10Mi (0%)        0 (0%)         9m11s\n  kubectl-5003                        failure-3                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s\n  nettest-6059                        netserver-0                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s\n  persistent-local-volumes-test-6122  hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-sllbm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\n  persistent-local-volumes-test-6122  pod-46c49e57-4de8-42c2-bd72-9cb23fbd7c37                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s\n  pod-network-test-8333               host-test-container-pod                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s\n  pod-network-test-8333               netserver-0                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s\n  port-forwarding-4560                pfpod                                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s\n  services-8404                       service-headless-toggled-5qq8l                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s\n  services-8404                       service-headless-wzkl2                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s\n  services-8404                       verify-service-up-host-exec-pod                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                    Requests    Limits\n  --------                    --------    ------\n  cpu                         220m (11%)  0 (0%)\n  memory                      208Mi (5%)  270Mi (7%)\n  ephemeral-storage           0 (0%)      0 (0%)\n  hugepages-1Gi               0 (0%)      0 (0%)\n  hugepages-2Mi               0 (0%)      0 (0%)\n  attachable-volumes-aws-ebs  0           0\nEvents:\n  Type    Reason                   Age    From     Message\n  ----    ------                   ----   ----     -------\n  Normal  Starting                 8m2s   kubelet  Starting kubelet.\n  Normal  NodeHasSufficientMemory  8m2s   kubelet  Node ip-172-20-33-144.ap-northeast-2.compute.internal status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    8m2s   kubelet  Node ip-172-20-33-144.ap-northeast-2.compute.internal status is now: NodeHasNoDiskPressure\n  Normal  NodeHasSufficientPID     8m2s   kubelet  Node ip-172-20-33-144.ap-northeast-2.compute.internal status is now: NodeHasSufficientPID\n  Normal  NodeAllocatableEnforced  8m2s   kubelet  Updated Node Allocatable limit across pods\n  Normal  NodeReady                7m42s  kubelet  Node ip-172-20-33-144.ap-northeast-2.compute.internal status is now: NodeReady\n"
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1090
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":63,"failed":0}
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:04:32.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:04:35.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1435" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":12,"skipped":63,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:35.452: INFO: Only supported for providers [vsphere] (not aws)
... skipping 222 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":4,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:36.713: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 104 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets","total":-1,"completed":7,"skipped":19,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:04:22.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":8,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":96,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:04:39.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3160" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":13,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:39.776: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-secret-654h
STEP: Creating a pod to test atomic-volume-subpath
May 29 01:04:15.581: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-654h" in namespace "subpath-6461" to be "Succeeded or Failed"
May 29 01:04:15.747: INFO: Pod "pod-subpath-test-secret-654h": Phase="Pending", Reason="", readiness=false. Elapsed: 165.887755ms
May 29 01:04:17.911: INFO: Pod "pod-subpath-test-secret-654h": Phase="Running", Reason="", readiness=true. Elapsed: 2.330256189s
May 29 01:04:20.076: INFO: Pod "pod-subpath-test-secret-654h": Phase="Running", Reason="", readiness=true. Elapsed: 4.494692239s
May 29 01:04:22.242: INFO: Pod "pod-subpath-test-secret-654h": Phase="Running", Reason="", readiness=true. Elapsed: 6.660610313s
May 29 01:04:24.406: INFO: Pod "pod-subpath-test-secret-654h": Phase="Running", Reason="", readiness=true. Elapsed: 8.825146143s
May 29 01:04:26.571: INFO: Pod "pod-subpath-test-secret-654h": Phase="Running", Reason="", readiness=true. Elapsed: 10.989675594s
May 29 01:04:28.735: INFO: Pod "pod-subpath-test-secret-654h": Phase="Running", Reason="", readiness=true. Elapsed: 13.154134297s
May 29 01:04:30.899: INFO: Pod "pod-subpath-test-secret-654h": Phase="Running", Reason="", readiness=true. Elapsed: 15.318171979s
May 29 01:04:33.065: INFO: Pod "pod-subpath-test-secret-654h": Phase="Running", Reason="", readiness=true. Elapsed: 17.484204337s
May 29 01:04:35.229: INFO: Pod "pod-subpath-test-secret-654h": Phase="Running", Reason="", readiness=true. Elapsed: 19.647931853s
May 29 01:04:37.395: INFO: Pod "pod-subpath-test-secret-654h": Phase="Running", Reason="", readiness=true. Elapsed: 21.814257085s
May 29 01:04:39.560: INFO: Pod "pod-subpath-test-secret-654h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.978795279s
STEP: Saw pod success
May 29 01:04:39.560: INFO: Pod "pod-subpath-test-secret-654h" satisfied condition "Succeeded or Failed"
May 29 01:04:39.724: INFO: Trying to get logs from node ip-172-20-52-235.ap-northeast-2.compute.internal pod pod-subpath-test-secret-654h container test-container-subpath-secret-654h: <nil>
STEP: delete the pod
May 29 01:04:40.075: INFO: Waiting for pod pod-subpath-test-secret-654h to disappear
May 29 01:04:40.238: INFO: Pod pod-subpath-test-secret-654h no longer exists
STEP: Deleting pod pod-subpath-test-secret-654h
May 29 01:04:40.238: INFO: Deleting pod "pod-subpath-test-secret-654h" in namespace "subpath-6461"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":58,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:03:41.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
May 29 01:04:07.146: INFO: stderr: "+ seq 1 150\n", then one "+ wget -q -T 1 -O - http://100.65.168.232:80\n+ echo\n" trace pair per iteration (150 in total; identical repetitions elided)
May 29 01:04:07.146: INFO: stdout: one backend pod name per request, every response coming from service-headless-toggled-zfkpf, service-headless-toggled-mxm8n, or service-headless-toggled-5qq8l (full 150-line list elided)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8404
STEP: Deleting pod verify-service-up-exec-pod-47k9f in namespace services-8404
STEP: verifying service-headless is not up
May 29 01:04:07.486: INFO: Creating new host exec pod
May 29 01:04:13.981: INFO: Running '/tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8404 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.248.11:80 && echo service-down-failed'
May 29 01:04:17.590: INFO: rc: 28
May 29 01:04:17.591: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.68.248.11:80 && echo service-down-failed" in pod services-8404/verify-service-down-host-exec-pod: error running /tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8404 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.248.11:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.68.248.11:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8404
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
May 29 01:04:18.096: INFO: Creating new host exec pod
May 29 01:04:22.592: INFO: Running '/tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8404 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.168.232:80 && echo service-down-failed'
May 29 01:04:26.234: INFO: rc: 28
May 29 01:04:26.234: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.65.168.232:80 && echo service-down-failed" in pod services-8404/verify-service-down-host-exec-pod: error running /tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8404 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.168.232:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.65.168.232:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8404
STEP: removing service.kubernetes.io/headless annotation
STEP: verifying service is up
May 29 01:04:26.737: INFO: Creating new host exec pod
... skipping 8 lines ...
May 29 01:04:35.738: INFO: stderr: "+ seq 1 150\n", then one "+ wget -q -T 1 -O - http://100.65.168.232:80\n+ echo\n" trace pair per iteration (150 in total; identical repetitions elided)
May 29 01:04:35.739: INFO: stdout: one backend pod name per request, every response coming from service-headless-toggled-zfkpf, service-headless-toggled-mxm8n, or service-headless-toggled-5qq8l (full 150-line list elided)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8404
STEP: Deleting pod verify-service-up-exec-pod-5kd2q in namespace services-8404
STEP: verifying service-headless is still not up
May 29 01:04:36.080: INFO: Creating new host exec pod
May 29 01:04:38.578: INFO: Running '/tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8404 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.248.11:80 && echo service-down-failed'
May 29 01:04:42.224: INFO: rc: 28
May 29 01:04:42.224: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.68.248.11:80 && echo service-down-failed" in pod services-8404/verify-service-down-host-exec-pod: error running /tmp/kubectl4288843332/kubectl --server=https://api.e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8404 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.248.11:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.68.248.11:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8404
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:04:42.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:61.308 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2587
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":5,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 45 lines ...
STEP: Creating pod
May 29 01:03:58.279: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 29 01:03:58.573: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-nhglz] to have phase Bound
May 29 01:03:58.873: INFO: PersistentVolumeClaim pvc-nhglz found but phase is Pending instead of Bound.
May 29 01:04:01.036: INFO: PersistentVolumeClaim pvc-nhglz found and phase=Bound (2.462265063s)
STEP: checking for CSIInlineVolumes feature
May 29 01:04:06.255: INFO: Error getting logs for pod inline-volume-5cxt9: the server rejected our request for an unknown reason (get pods inline-volume-5cxt9)
May 29 01:04:06.592: INFO: Deleting pod "inline-volume-5cxt9" in namespace "csi-mock-volumes-6298"
May 29 01:04:06.752: INFO: Wait up to 5m0s for pod "inline-volume-5cxt9" to be fully deleted
STEP: Deleting the previously created pod
May 29 01:04:13.071: INFO: Deleting pod "pvc-volume-tester-7dp5l" in namespace "csi-mock-volumes-6298"
May 29 01:04:13.232: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7dp5l" to be fully deleted
STEP: Checking CSI driver logs
May 29 01:04:21.719: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-7dp5l
May 29 01:04:21.719: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-6298
May 29 01:04:21.719: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: ca7cc2b1-fc72-453a-88ed-a2ea0ddec791
May 29 01:04:21.719: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
May 29 01:04:21.719: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
May 29 01:04:21.719: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ca7cc2b1-fc72-453a-88ed-a2ea0ddec791/volumes/kubernetes.io~csi/pvc-e6265bb5-042e-43bd-8725-187f70d1de69/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-7dp5l
May 29 01:04:21.719: INFO: Deleting pod "pvc-volume-tester-7dp5l" in namespace "csi-mock-volumes-6298"
STEP: Deleting claim pvc-nhglz
May 29 01:04:22.204: INFO: Waiting up to 2m0s for PersistentVolume pvc-e6265bb5-042e-43bd-8725-187f70d1de69 to get deleted
May 29 01:04:22.364: INFO: PersistentVolume pvc-e6265bb5-042e-43bd-8725-187f70d1de69 found and phase=Released (159.429087ms)
May 29 01:04:24.524: INFO: PersistentVolume pvc-e6265bb5-042e-43bd-8725-187f70d1de69 was removed
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:437
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:487
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":11,"skipped":130,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:42.847: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 19 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94
May 29 01:04:40.449: INFO: Waiting up to 5m0s for pod "busybox-user-0-97454a86-dc96-4a57-8391-e40b39494884" in namespace "security-context-test-3744" to be "Succeeded or Failed"
May 29 01:04:40.609: INFO: Pod "busybox-user-0-97454a86-dc96-4a57-8391-e40b39494884": Phase="Pending", Reason="", readiness=false. Elapsed: 160.698061ms
May 29 01:04:42.770: INFO: Pod "busybox-user-0-97454a86-dc96-4a57-8391-e40b39494884": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321658103s
May 29 01:04:42.770: INFO: Pod "busybox-user-0-97454a86-dc96-4a57-8391-e40b39494884" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:04:42.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3744" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":11,"skipped":97,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 104 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:502
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes","total":-1,"completed":3,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:43.295: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 52 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":79,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
May 29 01:04:48.645: INFO: Driver local doesn't support ntfs -- skipping
... skipping 104 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":8,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "nfs" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:97
------------------------------
... skipping 32949 lines ...
s \"default-token-vz4mj\" is forbidden: unable to create new content in namespace volumemode-7586 because it is being terminated\nE0529 01:10:26.501538       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-1207/pvc-4zbg7: storageclass.storage.k8s.io \"provisioning-1207\" not found\nI0529 01:10:26.501756       1 event.go:291] \"Event occurred\" object=\"provisioning-1207/pvc-4zbg7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-1207\\\" not found\"\nI0529 01:10:26.667597       1 pv_controller.go:864] volume \"local-9h7lc\" entered phase \"Available\"\nI0529 01:10:26.888476       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpath-attacher-bw99x\" objectUID=a1c4d3ea-0ac9-45cc-b481-6f1c2bd5eebb kind=\"EndpointSlice\" virtual=false\nI0529 01:10:26.894143       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpath-attacher-bw99x\" objectUID=a1c4d3ea-0ac9-45cc-b481-6f1c2bd5eebb kind=\"EndpointSlice\" propagationPolicy=Background\nI0529 01:10:27.069392       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpath-attacher-57b4968657\" objectUID=f5cb368d-eabb-4dd7-9876-a946f5462a43 kind=\"ControllerRevision\" virtual=false\nI0529 01:10:27.069663       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5064-9097/csi-hostpath-attacher\nI0529 01:10:27.069720       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpath-attacher-0\" objectUID=20dbb31f-316f-4c12-aa72-2940e4a3946d kind=\"Pod\" virtual=false\nI0529 01:10:27.071505       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpath-attacher-57b4968657\" objectUID=f5cb368d-eabb-4dd7-9876-a946f5462a43 kind=\"ControllerRevision\" propagationPolicy=Background\nI0529 01:10:27.072524       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpath-attacher-0\" objectUID=20dbb31f-316f-4c12-aa72-2940e4a3946d kind=\"Pod\" propagationPolicy=Background\nI0529 01:10:27.348986       1 namespace_controller.go:185] Namespace has been deleted provisioning-5064\nI0529 01:10:27.386123       1 pvc_protection_controller.go:291] PVC volume-3864/pvc-g42mc is unused\nI0529 01:10:27.391375       1 pv_controller.go:638] volume \"local-wnsq9\" is released and reclaim policy \"Retain\" will be executed\nI0529 01:10:27.394095       1 pv_controller.go:864] volume \"local-wnsq9\" entered phase \"Released\"\nI0529 01:10:27.402732       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpathplugin-h65mq\" objectUID=0a0aaa7a-6985-4773-ad36-48e5b5f62b19 kind=\"EndpointSlice\" virtual=false\nI0529 01:10:27.405668       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpathplugin-h65mq\" objectUID=0a0aaa7a-6985-4773-ad36-48e5b5f62b19 kind=\"EndpointSlice\" propagationPolicy=Background\nE0529 01:10:27.424694       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0529 01:10:27.552131       1 pv_controller_base.go:504] deletion of claim \"volume-3864/pvc-g42mc\" was already processed\nI0529 01:10:27.557393       1 namespace_controller.go:185] Namespace has been deleted 
watch-6012\nI0529 01:10:27.602075       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpathplugin-cd4c6b67f\" objectUID=de6ef614-96ea-431c-8195-b81ecb0db67e kind=\"ControllerRevision\" virtual=false\nI0529 01:10:27.602344       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5064-9097/csi-hostpathplugin\nI0529 01:10:27.602403       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpathplugin-0\" objectUID=0948dfd5-208e-43c0-a53b-33bc53fcf29e kind=\"Pod\" virtual=false\nI0529 01:10:27.605944       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpathplugin-0\" objectUID=0948dfd5-208e-43c0-a53b-33bc53fcf29e kind=\"Pod\" propagationPolicy=Background\nI0529 01:10:27.606149       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpathplugin-cd4c6b67f\" objectUID=de6ef614-96ea-431c-8195-b81ecb0db67e kind=\"ControllerRevision\" propagationPolicy=Background\nI0529 01:10:27.766018       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpath-provisioner-lzm9m\" objectUID=04f5573b-6605-4019-805b-3d05f754d8ef kind=\"EndpointSlice\" virtual=false\nI0529 01:10:27.768658       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpath-provisioner-lzm9m\" objectUID=04f5573b-6605-4019-805b-3d05f754d8ef kind=\"EndpointSlice\" propagationPolicy=Background\nI0529 01:10:27.960137       1 pvc_protection_controller.go:291] PVC volume-282/pvc-97l6f is unused\nI0529 01:10:27.969038       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpath-provisioner-0\" objectUID=b6e7bb84-6db6-4985-a733-ebf326b905d6 kind=\"Pod\" virtual=false\nI0529 01:10:27.969254       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5064-9097/csi-hostpath-provisioner\nI0529 01:10:27.969311       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpath-provisioner-774487cbc9\" objectUID=fc9edf9e-ad29-45ca-9292-4fe249062fb5 kind=\"ControllerRevision\" virtual=false\nI0529 01:10:27.975291       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpath-provisioner-0\" objectUID=b6e7bb84-6db6-4985-a733-ebf326b905d6 kind=\"Pod\" propagationPolicy=Background\nI0529 01:10:27.984019       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpath-provisioner-774487cbc9\" objectUID=fc9edf9e-ad29-45ca-9292-4fe249062fb5 kind=\"ControllerRevision\" propagationPolicy=Background\nI0529 01:10:27.987000       1 pv_controller.go:638] volume \"local-9q6nd\" is released and reclaim policy \"Retain\" will be executed\nI0529 01:10:27.992295       1 pv_controller.go:864] volume \"local-9q6nd\" entered phase \"Released\"\nI0529 01:10:28.133563       1 pv_controller_base.go:504] deletion of claim \"volume-282/pvc-97l6f\" was already processed\nI0529 01:10:28.150258       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpath-resizer-bnc6q\" objectUID=c409e554-48d9-4855-84ef-47eb348b9ed0 kind=\"EndpointSlice\" virtual=false\nI0529 01:10:28.153132       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpath-resizer-bnc6q\" objectUID=c409e554-48d9-4855-84ef-47eb348b9ed0 kind=\"EndpointSlice\" propagationPolicy=Background\nI0529 01:10:28.324397       1 garbagecollector.go:471] \"Processing object\" 
object=\"provisioning-5064-9097/csi-hostpath-resizer-5586c6dc88\" objectUID=2a669269-f6be-4070-80e0-b0ccfae4854b kind=\"ControllerRevision\" virtual=false\nI0529 01:10:28.324661       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5064-9097/csi-hostpath-resizer\nI0529 01:10:28.324712       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpath-resizer-0\" objectUID=99122987-496a-4139-92e7-bfbdaf465385 kind=\"Pod\" virtual=false\nI0529 01:10:28.326800       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpath-resizer-0\" objectUID=99122987-496a-4139-92e7-bfbdaf465385 kind=\"Pod\" propagationPolicy=Background\nI0529 01:10:28.326888       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpath-resizer-5586c6dc88\" objectUID=2a669269-f6be-4070-80e0-b0ccfae4854b kind=\"ControllerRevision\" propagationPolicy=Background\nE0529 01:10:28.452695       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0529 01:10:28.489873       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpath-snapshotter-4lqc7\" objectUID=9c27ba70-7365-4c25-84bf-1b751a5efbe6 kind=\"EndpointSlice\" virtual=false\nI0529 01:10:28.493309       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpath-snapshotter-4lqc7\" objectUID=9c27ba70-7365-4c25-84bf-1b751a5efbe6 kind=\"EndpointSlice\" propagationPolicy=Background\nI0529 01:10:28.662815       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpath-snapshotter-7bcc8f8878\" objectUID=00da2779-1304-4cec-ae62-e997cc3d8398 kind=\"ControllerRevision\" virtual=false\nI0529 01:10:28.663074       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5064-9097/csi-hostpath-snapshotter\nI0529 01:10:28.663141       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5064-9097/csi-hostpath-snapshotter-0\" objectUID=9f80580c-ce5c-49cf-b9f2-0cfd7eb3c7ee kind=\"Pod\" virtual=false\nI0529 01:10:28.665582       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpath-snapshotter-0\" objectUID=9f80580c-ce5c-49cf-b9f2-0cfd7eb3c7ee kind=\"Pod\" propagationPolicy=Background\nI0529 01:10:28.665799       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5064-9097/csi-hostpath-snapshotter-7bcc8f8878\" objectUID=00da2779-1304-4cec-ae62-e997cc3d8398 kind=\"ControllerRevision\" propagationPolicy=Background\nI0529 01:10:29.274785       1 namespace_controller.go:185] Namespace has been deleted provisioning-8630\nI0529 01:10:29.352451       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-9351/test-quota\nI0529 01:10:29.834253       1 namespace_controller.go:185] Namespace has been deleted webhook-8435\nI0529 01:10:29.965267       1 namespace_controller.go:185] Namespace has been deleted webhook-8435-markers\nI0529 01:10:30.417457       1 namespace_controller.go:185] Namespace has been deleted dns-autoscaling-5396\nI0529 01:10:30.689411       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:30.886882       1 namespace_controller.go:185] Namespace has been deleted kubectl-1226\nI0529 01:10:30.908432       1 event.go:291] \"Event occurred\" object=\"webhook-3172/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-6bd9446d55 to 1\"\nI0529 01:10:30.908625       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"webhook-3172/sample-webhook-deployment-6bd9446d55\" need=1 creating=1\nI0529 01:10:30.918354       1 event.go:291] \"Event occurred\" object=\"webhook-3172/sample-webhook-deployment-6bd9446d55\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-6bd9446d55-m4fvs\"\nI0529 01:10:30.925185       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-3172/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0529 01:10:31.084353       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:31.093565       1 event.go:291] \"Event occurred\" object=\"statefulset-3959/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nE0529 01:10:31.105516       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0529 01:10:31.106599       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:31.436366       1 pvc_protection_controller.go:291] PVC volume-expand-3599/awsfnj4p is unused\nI0529 01:10:31.442708       1 pv_controller.go:638] volume \"pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4\" is released and reclaim policy \"Delete\" will be executed\nI0529 01:10:31.445561       1 pv_controller.go:864] volume \"pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4\" entered phase \"Released\"\nI0529 01:10:31.446779       1 pv_controller.go:1326] isVolumeReleased[pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4]: volume is released\nI0529 01:10:31.491340       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:31.553098       1 event.go:291] \"Event occurred\" object=\"mounted-volume-expand-3933/pvc-n8rss\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0529 01:10:31.563651       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-northeast-2a/vol-087691938771ad2fa: error deleting EBS volume \"vol-087691938771ad2fa\" since volume is currently attached to \"i-0a4e2805a6c116cdf\"\nE0529 01:10:31.563708       1 goroutinemap.go:150] Operation for \"delete-pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4[be1dffc7-d2ee-41ef-8bde-2301efedc884]\" failed. No retries permitted until 2021-05-29 01:10:32.063689848 +0000 UTC m=+964.229830263 (durationBeforeRetry 500ms). Error: \"error deleting EBS volume \\\"vol-087691938771ad2fa\\\" since volume is currently attached to \\\"i-0a4e2805a6c116cdf\\\"\"\nI0529 01:10:31.563867       1 event.go:291] \"Event occurred\" object=\"pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-087691938771ad2fa\\\" since volume is currently attached to \\\"i-0a4e2805a6c116cdf\\\"\"\nI0529 01:10:31.581739       1 namespace_controller.go:185] Namespace has been deleted volumemode-7586\nI0529 01:10:31.716691       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8b\" need=1 creating=1\nI0529 01:10:31.717398       1 event.go:291] \"Event occurred\" object=\"mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8b to 1\"\nI0529 01:10:31.723268       1 event.go:291] \"Event occurred\" object=\"mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8b5b9h5\"\nI0529 01:10:31.731705       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72\" err=\"Operation cannot be fulfilled on deployments.apps \\\"deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0529 01:10:31.767746       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72\" err=\"Operation cannot be fulfilled on deployments.apps \\\"deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0529 01:10:31.884247       1 stateful_set_control.go:523] StatefulSet statefulset-2730/ss2 terminating Pod ss2-1 for update\nI0529 01:10:31.884745       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:31.894639       1 event.go:291] \"Event occurred\" object=\"statefulset-2730/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-1 in StatefulSet ss2 successful\"\nI0529 01:10:31.898937       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:31.940630       1 namespace_controller.go:185] Namespace has been deleted volume-expand-3717-2563\nI0529 01:10:31.960648       1 namespace_controller.go:185] Namespace has been deleted emptydir-3959\nI0529 01:10:32.018464       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nE0529 01:10:32.085014       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0529 01:10:32.093362       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nE0529 01:10:32.175592       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9056/default: secrets \"default-token-n2q95\" is forbidden: unable to create new content in namespace provisioning-9056 because it is being terminated\nI0529 01:10:32.497759       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nE0529 01:10:32.790420       1 tokens_controller.go:262] error synchronizing serviceaccount volume-2333/default: secrets \"default-token-74psg\" is forbidden: unable to create new content in namespace volume-2333 because it is being terminated\nI0529 01:10:32.816538       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:33.411586       1 namespace_controller.go:185] Namespace has been deleted volumemode-7821\nI0529 01:10:33.667040       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:33.686559       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-3172/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.150.73).\nE0529 01:10:34.217202       1 tokens_controller.go:262] error synchronizing serviceaccount volume-3864/default: secrets \"default-token-h4x4g\" is forbidden: unable to create new content in namespace volume-3864 because it is being terminated\nI0529 01:10:34.259624       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-087691938771ad2fa\") on node \"ip-172-20-33-144.ap-northeast-2.compute.internal\" \nI0529 01:10:34.261499       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-087691938771ad2fa\") on node \"ip-172-20-33-144.ap-northeast-2.compute.internal\" \nI0529 01:10:34.468933       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:34.486660       1 garbagecollector.go:471] \"Processing object\" object=\"pods-6556/pod-submit-status-2-0\" objectUID=57a5b633-136f-4818-a4be-334c98e4bd82 kind=\"CiliumEndpoint\" virtual=false\nI0529 01:10:34.488531       1 garbagecollector.go:580] \"Deleting object\" object=\"pods-6556/pod-submit-status-2-0\" objectUID=57a5b633-136f-4818-a4be-334c98e4bd82 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0529 01:10:34.674087       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nE0529 01:10:34.890785       1 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-9351/default: secrets \"default-token-xh7g4\" is forbidden: unable to create new content in namespace resourcequota-9351 because it is being terminated\nI0529 01:10:35.611765       1 namespace_controller.go:185] Namespace has been deleted endpointslice-3992\nI0529 01:10:35.968070       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8538-3830/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0529 01:10:36.195640       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:36.201385       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:36.206818       1 event.go:291] \"Event occurred\" object=\"statefulset-2730/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nI0529 01:10:36.211192       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:36.259847       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-7399/pod-cbe2f43e-656d-4b19-9e82-8fa1f4b3b2d8 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-68fp5 pvc- persistent-local-volumes-test-7399  f0411a4f-d09d-4402-b66a-17a97dae05db 34974 0 2021-05-29 01:10:25 +0000 UTC 2021-05-29 01:10:36 +0000 UTC 0xc003b38a78 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-29 01:10:25 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-29 01:10:25 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvc7kb4,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-7399,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0529 01:10:36.259990       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-7399/pvc-68fp5 because it is still being used\nI0529 01:10:36.323801       1 pv_controller.go:915] claim \"volume-9697/pvc-p9zbj\" bound to volume \"local-d67c5\"\nI0529 01:10:36.326557       1 pv_controller.go:1326] isVolumeReleased[pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4]: volume is released\nI0529 01:10:36.330862       1 pv_controller.go:864] volume \"local-d67c5\" entered phase \"Bound\"\nI0529 01:10:36.330887       1 pv_controller.go:967] volume \"local-d67c5\" bound to claim \"volume-9697/pvc-p9zbj\"\nI0529 01:10:36.337481       1 pv_controller.go:808] claim \"volume-9697/pvc-p9zbj\" entered phase \"Bound\"\nI0529 01:10:36.337649       1 pv_controller.go:915] claim \"provisioning-1207/pvc-4zbg7\" bound to volume \"local-9h7lc\"\nI0529 01:10:36.343158       1 pv_controller.go:864] volume \"local-9h7lc\" entered phase \"Bound\"\nI0529 01:10:36.343178       1 pv_controller.go:967] volume \"local-9h7lc\" bound to claim \"provisioning-1207/pvc-4zbg7\"\nI0529 01:10:36.347486       1 pv_controller.go:808] claim \"provisioning-1207/pvc-4zbg7\" entered phase \"Bound\"\nI0529 01:10:36.473343       1 aws_util.go:62] Error deleting EBS Disk volume 
aws://ap-northeast-2a/vol-087691938771ad2fa: error deleting EBS volume \"vol-087691938771ad2fa\" since volume is currently attached to \"i-0a4e2805a6c116cdf\"\nE0529 01:10:36.473410       1 goroutinemap.go:150] Operation for \"delete-pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4[be1dffc7-d2ee-41ef-8bde-2301efedc884]\" failed. No retries permitted until 2021-05-29 01:10:37.473391543 +0000 UTC m=+969.639531959 (durationBeforeRetry 1s). Error: \"error deleting EBS volume \\\"vol-087691938771ad2fa\\\" since volume is currently attached to \\\"i-0a4e2805a6c116cdf\\\"\"\nI0529 01:10:36.473728       1 event.go:291] \"Event occurred\" object=\"pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-087691938771ad2fa\\\" since volume is currently attached to \\\"i-0a4e2805a6c116cdf\\\"\"\nE0529 01:10:36.732985       1 pv_controller.go:1437] error finding provisioning plugin for claim volume-7499/pvc-cb5q7: storageclass.storage.k8s.io \"volume-7499\" not found\nI0529 01:10:36.733247       1 event.go:291] \"Event occurred\" object=\"volume-7499/pvc-cb5q7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-7499\\\" not found\"\nI0529 01:10:36.897625       1 pv_controller.go:864] volume \"nfs-n5m44\" entered phase \"Available\"\nI0529 01:10:37.113094       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-northeast-2a/vol-0315911e9bd70dfc0\nI0529 01:10:37.115399       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-7399/pod-cbe2f43e-656d-4b19-9e82-8fa1f4b3b2d8 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-68fp5 pvc- persistent-local-volumes-test-7399  f0411a4f-d09d-4402-b66a-17a97dae05db 34974 0 2021-05-29 01:10:25 +0000 UTC 2021-05-29 01:10:36 +0000 UTC 0xc003b38a78 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-29 01:10:25 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-29 01:10:25 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvc7kb4,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-7399,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0529 01:10:37.115479       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-7399/pvc-68fp5 because it is still being used\nI0529 01:10:37.161974       1 pv_controller.go:1652] volume \"pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840\" provisioned for claim \"mounted-volume-expand-3933/pvc-n8rss\"\nI0529 01:10:37.162153       1 event.go:291] \"Event 
occurred\" object=\"mounted-volume-expand-3933/pvc-n8rss\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840 using kubernetes.io/aws-ebs\"\nI0529 01:10:37.164664       1 pv_controller.go:864] volume \"pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840\" entered phase \"Bound\"\nI0529 01:10:37.164690       1 pv_controller.go:967] volume \"pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840\" bound to claim \"mounted-volume-expand-3933/pvc-n8rss\"\nI0529 01:10:37.171386       1 pv_controller.go:808] claim \"mounted-volume-expand-3933/pvc-n8rss\" entered phase \"Bound\"\nI0529 01:10:37.208527       1 namespace_controller.go:185] Namespace has been deleted provisioning-9056\nI0529 01:10:37.299012       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:37.353129       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3172/e2e-test-webhook-rmgw4\" objectUID=e4f2e145-cdef-4d15-b32b-3d83fed41dfc kind=\"EndpointSlice\" virtual=false\nI0529 01:10:37.356024       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3172/e2e-test-webhook-rmgw4\" objectUID=e4f2e145-cdef-4d15-b32b-3d83fed41dfc kind=\"EndpointSlice\" propagationPolicy=Background\nI0529 01:10:37.520018       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3172/sample-webhook-deployment-6bd9446d55\" objectUID=75800f4e-45f2-4854-97cd-e7ba8821ac95 kind=\"ReplicaSet\" virtual=false\nI0529 01:10:37.520231       1 deployment_controller.go:581] Deployment webhook-3172/sample-webhook-deployment has been deleted\nI0529 01:10:37.764379       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0315911e9bd70dfc0\") from node \"ip-172-20-58-248.ap-northeast-2.compute.internal\" \nI0529 01:10:37.809701       1 aws.go:2014] Assigned mount device ch -> volume vol-0315911e9bd70dfc0\nI0529 01:10:37.861144       1 namespace_controller.go:185] Namespace has been deleted volume-2333\nI0529 01:10:38.027551       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3172/sample-webhook-deployment-6bd9446d55\" objectUID=75800f4e-45f2-4854-97cd-e7ba8821ac95 kind=\"ReplicaSet\" propagationPolicy=Background\nI0529 01:10:38.029916       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3172/sample-webhook-deployment-6bd9446d55-m4fvs\" objectUID=110933e1-6935-4c63-8bf8-e03820643171 kind=\"Pod\" virtual=false\nI0529 01:10:38.031326       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3172/sample-webhook-deployment-6bd9446d55-m4fvs\" objectUID=110933e1-6935-4c63-8bf8-e03820643171 kind=\"Pod\" propagationPolicy=Background\nI0529 01:10:38.038206       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3172/sample-webhook-deployment-6bd9446d55-m4fvs\" objectUID=d6168b91-8c3e-407c-b961-b29368ffd240 kind=\"CiliumEndpoint\" virtual=false\nI0529 01:10:38.044211       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3172/sample-webhook-deployment-6bd9446d55-m4fvs\" objectUID=d6168b91-8c3e-407c-b961-b29368ffd240 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0529 
01:10:38.155927       1 aws.go:2427] AttachVolume volume=\"vol-0315911e9bd70dfc0\" instance=\"i-0e1599e52cb362162\" request returned {\n  AttachTime: 2021-05-29 01:10:38.124 +0000 UTC,\n  Device: \"/dev/xvdch\",\n  InstanceId: \"i-0e1599e52cb362162\",\n  State: \"attaching\",\n  VolumeId: \"vol-0315911e9bd70dfc0\"\n}\nI0529 01:10:38.302931       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:38.361163       1 pvc_protection_controller.go:291] PVC provisioning-7643/pvc-vktjw is unused\nI0529 01:10:38.366050       1 pv_controller.go:638] volume \"local-f4nqh\" is released and reclaim policy \"Retain\" will be executed\nI0529 01:10:38.368775       1 pv_controller.go:864] volume \"local-f4nqh\" entered phase \"Released\"\nI0529 01:10:38.524620       1 pv_controller_base.go:504] deletion of claim \"provisioning-7643/pvc-vktjw\" was already processed\nI0529 01:10:38.669342       1 stateful_set_control.go:523] StatefulSet statefulset-2730/ss2 terminating Pod ss2-0 for update\nI0529 01:10:38.672407       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:38.682398       1 event.go:291] \"Event occurred\" object=\"statefulset-2730/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-0 in StatefulSet ss2 successful\"\nI0529 01:10:38.688395       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:38.780727       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:38.875196       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-1c189026-490e-4aa0-86f9-55dd48c3b4a3\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-1961^94f895b0-c01a-11eb-ba8a-fe8a0765053e\") on node \"ip-172-20-58-248.ap-northeast-2.compute.internal\" \nI0529 01:10:38.877485       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-1c189026-490e-4aa0-86f9-55dd48c3b4a3\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-1961^94f895b0-c01a-11eb-ba8a-fe8a0765053e\") on node \"ip-172-20-58-248.ap-northeast-2.compute.internal\" \nI0529 01:10:38.879139       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-1c189026-490e-4aa0-86f9-55dd48c3b4a3\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-1961^94f895b0-c01a-11eb-ba8a-fe8a0765053e\") on node \"ip-172-20-58-248.ap-northeast-2.compute.internal\" \nI0529 01:10:38.930435       1 utils.go:413] couldn't find ipfamilies for headless service: services-3614/nodeport-range-test. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.185.207).\nI0529 01:10:38.958610       1 stateful_set_control.go:523] StatefulSet statefulset-3959/ss2 terminating Pod ss2-2 for update\nI0529 01:10:38.961942       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:38.962462       1 event.go:291] \"Event occurred\" object=\"statefulset-3959/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-2 in StatefulSet ss2 successful\"\nI0529 01:10:38.981317       1 namespace_controller.go:185] Namespace has been deleted kubectl-9221\nI0529 01:10:39.149691       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:39.283390       1 namespace_controller.go:185] Namespace has been deleted volume-3864\nI0529 01:10:39.422015       1 garbagecollector.go:471] \"Processing object\" object=\"services-3614/nodeport-range-test-r4mbt\" objectUID=39b31c14-7c67-4349-adbc-19dfd297ed0e kind=\"EndpointSlice\" virtual=false\nI0529 01:10:39.425961       1 garbagecollector.go:580] \"Deleting object\" object=\"services-3614/nodeport-range-test-r4mbt\" objectUID=39b31c14-7c67-4349-adbc-19dfd297ed0e kind=\"EndpointSlice\" propagationPolicy=Background\nE0529 01:10:39.518416       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-6788/default: secrets \"default-token-fq9f5\" is forbidden: unable to create new content in namespace secrets-6788 because it is being terminated\nI0529 01:10:39.658942       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-087691938771ad2fa\") on node \"ip-172-20-33-144.ap-northeast-2.compute.internal\" \nI0529 01:10:39.681769       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:39.957999       1 namespace_controller.go:185] Namespace has been deleted resourcequota-9351\nI0529 01:10:39.981699       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:40.000130       1 deployment_controller.go:581] Deployment gc-5909/simpletest.deployment has been deleted\nI0529 01:10:40.285175       1 aws.go:2037] Releasing in-process attachment entry: ch -> volume vol-0315911e9bd70dfc0\nI0529 01:10:40.285225       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0315911e9bd70dfc0\") from node \"ip-172-20-58-248.ap-northeast-2.compute.internal\" \nI0529 01:10:40.285391       1 event.go:291] \"Event occurred\" object=\"mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8b5b9h5\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840\\\" \"\nI0529 01:10:40.584035       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-7399/pod-cbe2f43e-656d-4b19-9e82-8fa1f4b3b2d8 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-68fp5 pvc- persistent-local-volumes-test-7399  f0411a4f-d09d-4402-b66a-17a97dae05db 34974 0 2021-05-29 01:10:25 +0000 UTC 2021-05-29 01:10:36 +0000 UTC 0xc003b38a78 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-29 01:10:25 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-29 01:10:25 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvc7kb4,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-7399,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0529 01:10:40.584119       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-7399/pvc-68fp5 because it is still being used\nI0529 01:10:40.596239       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-7399/pvc-68fp5 is unused\nI0529 01:10:40.601802       1 pv_controller.go:638] volume \"local-pvc7kb4\" is released and reclaim policy \"Retain\" will be executed\nI0529 01:10:40.604673       1 pv_controller.go:864] volume \"local-pvc7kb4\" entered phase \"Released\"\nI0529 01:10:40.608483       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-7399/pvc-68fp5\" was already processed\nI0529 01:10:41.454565       1 pvc_protection_controller.go:291] PVC volume-1663/pvc-95hlh is unused\nI0529 01:10:41.460933       1 pv_controller.go:638] volume \"local-x8b4n\" is released and reclaim policy \"Retain\" will be executed\nI0529 01:10:41.465136       1 pv_controller.go:864] volume 
\"local-x8b4n\" entered phase \"Released\"\nI0529 01:10:41.618555       1 pv_controller_base.go:504] deletion of claim \"volume-1663/pvc-95hlh\" was already processed\nI0529 01:10:41.674642       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nE0529 01:10:41.714086       1 tokens_controller.go:262] error synchronizing serviceaccount events-7773/default: secrets \"default-token-jdf6r\" is forbidden: unable to create new content in namespace events-7773 because it is being terminated\nE0529 01:10:41.873469       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0529 01:10:41.976474       1 namespace_controller.go:185] Namespace has been deleted volume-282\nI0529 01:10:42.322812       1 utils.go:424] couldn't find ipfamilies for headless service: dns-7400/dns-test-service-2 likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:42.460886       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8538/pvc-qgznx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-8538\\\" or manually created by system administrator\"\nI0529 01:10:42.461081       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8538/pvc-qgznx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-8538\\\" or manually created by system administrator\"\nI0529 01:10:42.483203       1 pv_controller.go:864] volume \"pvc-d76235fc-6fe5-4e65-9c31-65602b3d30f5\" entered phase \"Bound\"\nI0529 01:10:42.483232       1 pv_controller.go:967] volume \"pvc-d76235fc-6fe5-4e65-9c31-65602b3d30f5\" bound to claim \"csi-mock-volumes-8538/pvc-qgznx\"\nI0529 01:10:42.491036       1 utils.go:424] couldn't find ipfamilies for headless service: dns-7400/dns-test-service-2 likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:42.499396       1 pv_controller.go:808] claim \"csi-mock-volumes-8538/pvc-qgznx\" entered phase \"Bound\"\nI0529 01:10:43.328109       1 utils.go:424] couldn't find ipfamilies for headless service: dns-7400/dns-test-service-2 likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:43.940775       1 pvc_protection_controller.go:291] PVC provisioning-1207/pvc-4zbg7 is unused\nI0529 01:10:43.946085       1 pv_controller.go:638] volume \"local-9h7lc\" is released and reclaim policy \"Retain\" will be executed\nI0529 01:10:43.948725       1 pv_controller.go:864] volume \"local-9h7lc\" entered phase \"Released\"\nI0529 01:10:43.979301       1 event.go:291] \"Event occurred\" object=\"volume-818/nfsmvnjt\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-volume-818\\\" or manually created by system administrator\"\nI0529 01:10:44.106914       1 pv_controller_base.go:504] deletion of claim \"provisioning-1207/pvc-4zbg7\" was already processed\nI0529 01:10:44.175508       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/clusterip-service. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.92.121).\nI0529 01:10:44.267156       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:44.273479       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:44.276918       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-8407/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:44.281554       1 event.go:291] \"Event occurred\" object=\"statefulset-3959/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0529 01:10:44.281841       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:44.320977       1 namespace_controller.go:185] Namespace has been deleted provisioning-5064-9097\nI0529 01:10:44.342851       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).\nI0529 01:10:44.506981       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"services-4561/externalsvc\" need=2 creating=2\nI0529 01:10:44.522189       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).\nI0529 01:10:44.523342       1 event.go:291] \"Event occurred\" object=\"services-4561/externalsvc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalsvc-j6ddd\"\nI0529 01:10:44.542740       1 namespace_controller.go:185] Namespace has been deleted secrets-6788\nI0529 01:10:44.550037       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).\nI0529 01:10:44.552496       1 event.go:291] \"Event occurred\" object=\"services-4561/externalsvc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalsvc-nnqh6\"\nE0529 01:10:44.883230       1 pv_controller.go:1437] error finding provisioning plugin for claim volume-8255/pvc-8z7jx: storageclass.storage.k8s.io \"volume-8255\" not found\nI0529 01:10:44.883503       1 event.go:291] \"Event occurred\" object=\"volume-8255/pvc-8z7jx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-8255\\\" not found\"\nI0529 01:10:44.886775       1 utils.go:424] couldn't find ipfamilies for headless service: dns-7400/dns-test-service-2 likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:45.128759       1 pv_controller.go:864] volume \"aws-tfz7k\" entered phase \"Available\"\nI0529 01:10:45.180941       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/clusterip-service. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.92.121).\nI0529 01:10:45.293350       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-8407/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:45.419948       1 pvc_protection_controller.go:291] PVC volume-9697/pvc-p9zbj is unused\nI0529 01:10:45.425105       1 pv_controller.go:638] volume \"local-d67c5\" is released and reclaim policy \"Retain\" will be executed\nI0529 01:10:45.428922       1 pv_controller.go:864] volume \"local-d67c5\" entered phase \"Released\"\nI0529 01:10:45.509214       1 pv_controller.go:864] volume \"pvc-b8deaff5-a28c-4c94-9e98-d91c21d9fb9e\" entered phase \"Bound\"\nI0529 01:10:45.509246       1 pv_controller.go:967] volume \"pvc-b8deaff5-a28c-4c94-9e98-d91c21d9fb9e\" bound to claim \"volume-818/nfsmvnjt\"\nI0529 01:10:45.514617       1 pv_controller.go:808] claim \"volume-818/nfsmvnjt\" entered phase \"Bound\"\nI0529 01:10:45.586266       1 pv_controller_base.go:504] deletion of claim \"volume-9697/pvc-p9zbj\" was already processed\nI0529 01:10:46.784607       1 namespace_controller.go:185] Namespace has been deleted events-7773\nI0529 01:10:47.175078       1 namespace_controller.go:185] Namespace has been deleted webhook-3172\nI0529 01:10:47.332643       1 namespace_controller.go:185] Namespace has been deleted webhook-3172-markers\nE0529 01:10:47.937045       1 tokens_controller.go:262] error synchronizing serviceaccount volume-8564/default: secrets \"default-token-5drhv\" is forbidden: unable to create new content in namespace volume-8564 because it is being terminated\nI0529 01:10:48.058590       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-7399\nE0529 01:10:48.274317       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-6324/default: secrets \"default-token-x5jjq\" is forbidden: unable to create new content in namespace downward-api-6324 because it is being terminated\nI0529 01:10:48.461429       1 utils.go:424] couldn't find ipfamilies for headless service: dns-7400/dns-test-service-2 likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:48.475970       1 garbagecollector.go:471] \"Processing object\" object=\"dns-7400/dns-test-a4f930f2-22de-45cf-b7fb-8508e3b33604\" objectUID=c554feba-579d-4905-9789-ccc21ddbf49b kind=\"CiliumEndpoint\" virtual=false\nI0529 01:10:48.480121       1 utils.go:424] couldn't find ipfamilies for headless service: dns-7400/dns-test-service-2 likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nE0529 01:10:48.483548       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-7643/default: secrets \"default-token-pmhrk\" is forbidden: unable to create new content in namespace provisioning-7643 because it is being terminated\nI0529 01:10:48.483675       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-7400/dns-test-a4f930f2-22de-45cf-b7fb-8508e3b33604\" objectUID=c554feba-579d-4905-9789-ccc21ddbf49b kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0529 01:10:48.594214       1 pvc_protection_controller.go:291] PVC volume-expand-1961/csi-hostpathn85v7 is unused\nI0529 01:10:48.599682       1 pv_controller.go:638] volume \"pvc-1c189026-490e-4aa0-86f9-55dd48c3b4a3\" is released and reclaim policy \"Delete\" will be executed\nI0529 01:10:48.602706       1 pv_controller.go:864] volume \"pvc-1c189026-490e-4aa0-86f9-55dd48c3b4a3\" entered phase \"Released\"\nI0529 01:10:48.605044       1 pv_controller.go:1326] isVolumeReleased[pvc-1c189026-490e-4aa0-86f9-55dd48c3b4a3]: volume is released\nI0529 01:10:48.644287       1 garbagecollector.go:471] \"Processing object\" object=\"dns-7400/dns-test-service-2-rd886\" objectUID=c4980049-415f-4256-96fe-e59e8fc04a51 kind=\"EndpointSlice\" virtual=false\nI0529 01:10:48.644769       1 garbagecollector.go:471] \"Processing object\" object=\"dns-7400/dns-test-service-2-k796t\" objectUID=be271cd7-1bf8-492b-8ea5-c21c1473deb7 kind=\"EndpointSlice\" virtual=false\nI0529 01:10:48.648075       1 pv_controller_base.go:504] deletion of claim \"volume-expand-1961/csi-hostpathn85v7\" was already processed\nI0529 01:10:48.651374       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-7400/dns-test-service-2-rd886\" objectUID=c4980049-415f-4256-96fe-e59e8fc04a51 kind=\"EndpointSlice\" propagationPolicy=Background\nI0529 01:10:48.651651       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-7400/dns-test-service-2-k796t\" objectUID=be271cd7-1bf8-492b-8ea5-c21c1473deb7 kind=\"EndpointSlice\" propagationPolicy=Background\nI0529 01:10:48.715873       1 event.go:291] \"Event occurred\" object=\"webhook-6501/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-6bd9446d55 to 1\"\nI0529 01:10:48.716314       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"webhook-6501/sample-webhook-deployment-6bd9446d55\" need=1 creating=1\nI0529 01:10:48.726435       1 event.go:291] \"Event occurred\" object=\"webhook-6501/sample-webhook-deployment-6bd9446d55\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-6bd9446d55-lzh2x\"\nI0529 01:10:48.733462       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-6501/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0529 01:10:48.832613       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0529 01:10:49.467685       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).\nE0529 01:10:49.492921       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0529 01:10:49.515096       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).\nI0529 01:10:49.764807       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:49.778427       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-3959/ss2-0\" objectUID=1b831367-b264-4241-8541-f3974715934a kind=\"CiliumEndpoint\" virtual=false\nI0529 01:10:49.787272       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:49.788378       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-3959/ss2-0\" objectUID=1b831367-b264-4241-8541-f3974715934a kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0529 01:10:49.795280       1 event.go:291] \"Event occurred\" object=\"statefulset-3959/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI0529 01:10:49.795552       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:49.940573       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:49.946249       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-3959/ss2-2\" objectUID=6d7a55a8-1c84-465e-8a08-64a559ccc30c kind=\"CiliumEndpoint\" virtual=false\nI0529 01:10:49.953340       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:49.953645       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-3959/ss2-2\" objectUID=6d7a55a8-1c84-465e-8a08-64a559ccc30c kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0529 01:10:50.029679       1 namespace_controller.go:185] Namespace has been deleted services-3614\nI0529 01:10:50.282398       1 operation_generator.go:1442] ExpandVolume succeeded for volume mounted-volume-expand-3933/pvc-n8rss\nI0529 01:10:50.286558       1 operation_generator.go:1454] ExpandVolume.UpdatePV succeeded for volume mounted-volume-expand-3933/pvc-n8rss\nI0529 01:10:50.299394       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-8407/test-hvrtw\" objectUID=e07d9560-9c05-4aea-962c-03853cf175c8 kind=\"EndpointSlice\" virtual=false\nI0529 01:10:50.299814       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-8407/test-crxwb\" objectUID=d56f9910-cc27-4a04-92fa-1b9a206f518a kind=\"EndpointSlice\" virtual=false\nI0529 01:10:50.302192       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-8407/test-hvrtw\" objectUID=e07d9560-9c05-4aea-962c-03853cf175c8 kind=\"EndpointSlice\" propagationPolicy=Background\nI0529 01:10:50.302566       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-8407/test-crxwb\" objectUID=d56f9910-cc27-4a04-92fa-1b9a206f518a kind=\"EndpointSlice\" propagationPolicy=Background\nE0529 01:10:50.335707       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-8407/default: secrets \"default-token-ks5w2\" is forbidden: unable to create new content in namespace statefulset-8407 because it is being terminated\nI0529 01:10:50.471400       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).\nI0529 01:10:50.779489       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nE0529 01:10:51.122356       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-2052/pvc-dt65w: storageclass.storage.k8s.io \"provisioning-2052\" not found\nI0529 01:10:51.122661       1 event.go:291] \"Event occurred\" object=\"provisioning-2052/pvc-dt65w\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-2052\\\" not found\"\nI0529 01:10:51.193249       1 utils.go:424] couldn't find ipfamilies for headless service: services-4561/clusterip-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:10:51.291747       1 pv_controller.go:864] volume \"local-g5n84\" entered phase \"Available\"\nI0529 01:10:51.323773       1 pv_controller.go:915] claim \"volume-8255/pvc-8z7jx\" bound to volume \"aws-tfz7k\"\nI0529 01:10:51.327589       1 pv_controller.go:1326] isVolumeReleased[pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4]: volume is released\nI0529 01:10:51.330849       1 pv_controller.go:864] volume \"aws-tfz7k\" entered phase \"Bound\"\nI0529 01:10:51.330872       1 pv_controller.go:967] volume \"aws-tfz7k\" bound to claim \"volume-8255/pvc-8z7jx\"\nI0529 01:10:51.335762       1 pv_controller.go:808] claim \"volume-8255/pvc-8z7jx\" entered phase \"Bound\"\nI0529 01:10:51.335851       1 pv_controller.go:915] claim \"provisioning-2052/pvc-dt65w\" bound to volume \"local-g5n84\"\nI0529 01:10:51.341570       1 pv_controller.go:864] volume \"local-g5n84\" entered phase \"Bound\"\nI0529 01:10:51.341593       1 pv_controller.go:967] volume \"local-g5n84\" bound to claim \"provisioning-2052/pvc-dt65w\"\nI0529 01:10:51.348438       1 pv_controller.go:808] claim \"provisioning-2052/pvc-dt65w\" entered phase \"Bound\"\nI0529 01:10:51.348499       1 pv_controller.go:915] claim \"volume-7499/pvc-cb5q7\" bound to volume \"nfs-n5m44\"\nI0529 01:10:51.354173       1 pv_controller.go:864] volume \"nfs-n5m44\" entered phase \"Bound\"\nI0529 01:10:51.354198       1 pv_controller.go:967] volume \"nfs-n5m44\" bound to claim \"volume-7499/pvc-cb5q7\"\nI0529 01:10:51.363959       1 pv_controller.go:808] claim \"volume-7499/pvc-cb5q7\" entered phase \"Bound\"\nI0529 01:10:51.505670       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-northeast-2a/vol-087691938771ad2fa\nI0529 01:10:51.505699       1 pv_controller.go:1421] volume \"pvc-d5c07a17-7358-4b22-b9ab-058cc8deb9f4\" deleted\nI0529 01:10:51.514637       1 pv_controller_base.go:504] deletion of claim \"volume-expand-3599/awsfnj4p\" was already processed\nI0529 01:10:51.530573       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-6501/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
E0529 01:10:51.673326       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-7207/default: secrets "default-token-l2wdn" is forbidden: unable to create new content in namespace configmap-7207 because it is being terminated
I0529 01:10:51.673617       1 replica_set.go:559] "Too few replicas" replicaSet="mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8b" need=1 creating=1
I0529 01:10:51.682125       1 garbagecollector.go:471] "Processing object" object="mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8b5b9h5" objectUID=343c390a-b08e-4e22-90bb-d5151252142c kind="CiliumEndpoint" virtual=false
I0529 01:10:51.690049       1 garbagecollector.go:580] "Deleting object" object="mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8b5b9h5" objectUID=343c390a-b08e-4e22-90bb-d5151252142c kind="CiliumEndpoint" propagationPolicy=Background
I0529 01:10:51.692042       1 event.go:291] "Event occurred" object="mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8bl7gsw"
I0529 01:10:51.727007       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0529 01:10:51.734863       1 event.go:291] "Event occurred" object="statefulset-2730/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-0 in StatefulSet ss2 successful"
I0529 01:10:51.742727       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0529 01:10:52.057338       1 pv_controller.go:864] volume "local-pvx4n6p" entered phase "Available"
I0529 01:10:52.202725       1 utils.go:424] couldn't find ipfamilies for headless service: services-4561/clusterip-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0529 01:10:52.212871       1 pv_controller.go:915] claim "persistent-local-volumes-test-6326/pvc-cpbkx" bound to volume "local-pvx4n6p"
I0529 01:10:52.218666       1 pv_controller.go:864] volume "local-pvx4n6p" entered phase "Bound"
I0529 01:10:52.218701       1 pv_controller.go:967] volume "local-pvx4n6p" bound to claim "persistent-local-volumes-test-6326/pvc-cpbkx"
I0529 01:10:52.224968       1 pv_controller.go:808] claim "persistent-local-volumes-test-6326/pvc-cpbkx" entered phase "Bound"
I0529 01:10:52.435551       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "aws-tfz7k" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0ca935930550a37df") from node "ip-172-20-52-235.ap-northeast-2.compute.internal"
I0529 01:10:52.513959       1 aws.go:2014] Assigned mount device cb -> volume vol-0ca935930550a37df
I0529 01:10:52.535281       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-6501/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.236.29).
I0529 01:10:52.857518       1 aws.go:2427] AttachVolume volume="vol-0ca935930550a37df" instance="i-01b30dd3a1104aa2c" request returned {
  AttachTime: 2021-05-29 01:10:52.847 +0000 UTC,
  Device: "/dev/xvdcb",
  InstanceId: "i-01b30dd3a1104aa2c",
  State: "attaching",
  VolumeId: "vol-0ca935930550a37df"
}
I0529 01:10:53.019951       1 namespace_controller.go:185] Namespace has been deleted volume-8564
I0529 01:10:53.337581       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0529 01:10:53.402278       1 namespace_controller.go:185] Namespace has been deleted downward-api-6324
I0529 01:10:53.532417       1 namespace_controller.go:185] Namespace has been deleted volume-1663
I0529 01:10:53.563068       1 namespace_controller.go:185] Namespace has been deleted provisioning-7643
E0529 01:10:53.686310       1 pv_controller.go:1437] error finding provisioning plugin for claim volume-1364/pvc-njgn5: storageclass.storage.k8s.io "volume-1364" not found
I0529 01:10:53.686585       1 event.go:291] "Event occurred" object="volume-1364/pvc-njgn5" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-1364\" not found"
I0529 01:10:53.853857       1 pv_controller.go:864] volume "local-5clnr" entered phase "Available"
E0529 01:10:54.064771       1 tokens_controller.go:262] error synchronizing serviceaccount dns-7400/default: secrets "default-token-wsnb7" is forbidden: unable to create new content in namespace dns-7400 because it is being terminated
E0529 01:10:54.468674       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0529 01:10:54.699584       1 tokens_controller.go:262] error synchronizing serviceaccount volume-9697/default: secrets "default-token-n6gq6" is forbidden: unable to create new content in namespace volume-9697 because it is being terminated
I0529 01:10:54.722885       1 event.go:291] "Event occurred" object="statefulset-3959/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-2 in StatefulSet ss2 successful"
I0529 01:10:54.723086       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0529 01:10:54.959168       1 aws.go:2037] Releasing in-process attachment entry: cb -> volume vol-0ca935930550a37df
I0529 01:10:54.959219       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "aws-tfz7k" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0ca935930550a37df") from node "ip-172-20-52-235.ap-northeast-2.compute.internal"
I0529 01:10:54.959360       1 event.go:291] "Event occurred" object="volume-8255/aws-injector" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"aws-tfz7k\" "
I0529 01:10:55.357070       1 namespace_controller.go:185] Namespace has been deleted statefulset-8407
I0529 01:10:55.471266       1 aws.go:1819] Found instances in zones map[ap-northeast-2a:{}]
I0529 01:10:55.735201       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0529 01:10:55.801895       1 namespace_controller.go:185] Namespace has been deleted provisioning-1207
I0529 01:10:56.717256       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
E0529 01:10:56.782451       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0529 01:10:56.796264       1 namespace_controller.go:185] Namespace has been deleted configmap-7207
I0529 01:10:57.316373       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
E0529 01:10:57.426691       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0529 01:10:57.722321       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
E0529 01:10:57.814342       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9428/default: secrets "default-token-57xl4" is forbidden: unable to create new content in namespace provisioning-9428 because it is being terminated
I0529 01:10:58.001706       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.231.222).
I0529 01:10:58.128882       1 pvc_protection_controller.go:291] PVC csi-mock-volumes-8538/pvc-qgznx is unused
I0529 01:10:58.135056       1 pv_controller.go:638] volume "pvc-d76235fc-6fe5-4e65-9c31-65602b3d30f5" is released and reclaim policy "Delete" will be executed
I0529 01:10:58.138151       1 pv_controller.go:864] volume "pvc-d76235fc-6fe5-4e65-9c31-65602b3d30f5" entered phase "Released"
I0529 01:10:58.140246       1 pv_controller.go:1326] isVolumeReleased[pvc-d76235fc-6fe5-4e65-9c31-65602b3d30f5]: volume is released
I0529 01:10:58.149224       1 pv_controller_base.go:504] deletion of claim "csi-mock-volumes-8538/pvc-qgznx" was already processed
I0529 01:10:58.168764       1 event.go:291] "Event occurred" object="volume-expand-5590-693/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
I0529 01:10:58.169020       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.231.222).
I0529 01:10:58.485851       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.203.108).
I0529 01:10:58.496192       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpath-attacher-p8zsb" objectUID=4cbc0c5b-f9cc-430e-94a5-8a271358b9e2 kind="EndpointSlice" virtual=false
I0529 01:10:58.499567       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpath-attacher-p8zsb" objectUID=4cbc0c5b-f9cc-430e-94a5-8a271358b9e2 kind="EndpointSlice" propagationPolicy=Background
I0529 01:10:58.654693       1 event.go:291] "Event occurred" object="volume-expand-5590-693/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0529 01:10:58.655027       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.203.108).
I0529 01:10:58.680145       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpath-attacher-5f55f6b47d" objectUID=7ef1521f-d36a-4b00-9ec7-5384b89d9d20 kind="ControllerRevision" virtual=false
I0529 01:10:58.684538       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-1961-4336/csi-hostpath-attacher
I0529 01:10:58.684777       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpath-attacher-0" objectUID=0a843ec0-f830-4981-9934-d8a32432d51d kind="Pod" virtual=false
I0529 01:10:58.685724       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpath-attacher-5f55f6b47d" objectUID=7ef1521f-d36a-4b00-9ec7-5384b89d9d20 kind="ControllerRevision" propagationPolicy=Background
I0529 01:10:58.692162       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpath-attacher-0" objectUID=0a843ec0-f830-4981-9934-d8a32432d51d kind="Pod" propagationPolicy=Background
I0529 01:10:58.807787       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.90.209).
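The garbagecollector lines above act on deletes issued with propagationPolicy=Background: the owner is removed first and its dependents (Pods, ControllerRevisions, EndpointSlices) are collected asynchronously. A sketch of requesting that cascade from a client (the client-go calls are real; the target StatefulSet is illustrative):

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteStatefulSetInBackground issues the kind of delete the garbage
// collector is acting on above: the owner goes away immediately and
// dependents are deleted in the background.
func deleteStatefulSetInBackground(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.AppsV1().StatefulSets(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}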
E0529 01:10:58.950097       1 tokens_controller.go:262] error synchronizing serviceaccount replicaset-2372/default: secrets "default-token-pwnp9" is forbidden: unable to create new content in namespace replicaset-2372 because it is being terminated
I0529 01:10:58.976321       1 event.go:291] "Event occurred" object="volume-expand-5590-693/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
I0529 01:10:58.976557       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.90.209).
I0529 01:10:58.993527       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpathplugin-9tvxg" objectUID=9fbbec50-9f76-44b0-9ee4-c3c07a5b1ee0 kind="EndpointSlice" virtual=false
I0529 01:10:58.997528       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpathplugin-9tvxg" objectUID=9fbbec50-9f76-44b0-9ee4-c3c07a5b1ee0 kind="EndpointSlice" propagationPolicy=Background
I0529 01:10:59.005590       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.231.222).
I0529 01:10:59.130980       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.239.223).
I0529 01:10:59.160510       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpathplugin-58bbff7c8" objectUID=71c5d574-3cbf-4342-9bb5-42478f0ea2f8 kind="ControllerRevision" virtual=false
I0529 01:10:59.160768       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-1961-4336/csi-hostpathplugin
I0529 01:10:59.160846       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpathplugin-0" objectUID=170e27c7-7037-4dbb-9c88-3987258f2399 kind="Pod" virtual=false
I0529 01:10:59.162893       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpathplugin-0" objectUID=170e27c7-7037-4dbb-9c88-3987258f2399 kind="Pod" propagationPolicy=Background
I0529 01:10:59.162915       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpathplugin-58bbff7c8" objectUID=71c5d574-3cbf-4342-9bb5-42478f0ea2f8 kind="ControllerRevision" propagationPolicy=Background
I0529 01:10:59.168073       1 namespace_controller.go:185] Namespace has been deleted dns-7400
I0529 01:10:59.255970       1 namespace_controller.go:185] Namespace has been deleted volume-expand-1961
I0529 01:10:59.294920       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.239.223).
I0529 01:10:59.295453       1 event.go:291] "Event occurred" object="volume-expand-5590-693/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
I0529 01:10:59.316084       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpath-provisioner-52p6d" objectUID=38c70547-0919-471a-b1eb-2b06bf1ca50a kind="EndpointSlice" virtual=false
I0529 01:10:59.318783       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpath-provisioner-52p6d" objectUID=38c70547-0919-471a-b1eb-2b06bf1ca50a kind="EndpointSlice" propagationPolicy=Background
I0529 01:10:59.454436       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.82.41).
I0529 01:10:59.484411       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpath-provisioner-7448df7b7d" objectUID=a6c7ecba-e924-4757-84a2-2b1a45f66dcc kind="ControllerRevision" virtual=false
I0529 01:10:59.484661       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-1961-4336/csi-hostpath-provisioner
I0529 01:10:59.484718       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpath-provisioner-0" objectUID=ad0b85eb-bcb0-4ee1-9183-a0cf5c24fc71 kind="Pod" virtual=false
I0529 01:10:59.487008       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpath-provisioner-0" objectUID=ad0b85eb-bcb0-4ee1-9183-a0cf5c24fc71 kind="Pod" propagationPolicy=Background
I0529 01:10:59.487158       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpath-provisioner-7448df7b7d" objectUID=a6c7ecba-e924-4757-84a2-2b1a45f66dcc kind="ControllerRevision" propagationPolicy=Background
I0529 01:10:59.621621       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.82.41).
I0529 01:10:59.621755       1 event.go:291] "Event occurred" object="volume-expand-5590-693/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
I0529 01:10:59.641002       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpath-resizer-cskth" objectUID=f7f5f261-ada2-4c45-ac0b-0a60e9175bd6 kind="EndpointSlice" virtual=false
I0529 01:10:59.644895       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpath-resizer-cskth" objectUID=f7f5f261-ada2-4c45-ac0b-0a60e9175bd6 kind="EndpointSlice" propagationPolicy=Background
I0529 01:10:59.806024       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpath-resizer-857c56764f" objectUID=7cc7145f-5af7-4fd2-ac9d-f56e436232ee kind="ControllerRevision" virtual=false
I0529 01:10:59.806283       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-1961-4336/csi-hostpath-resizer
I0529 01:10:59.806344       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpath-resizer-0" objectUID=e3d64441-747f-4b43-84b7-14db7a25208a kind="Pod" virtual=false
I0529 01:10:59.808126       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpath-resizer-857c56764f" objectUID=7cc7145f-5af7-4fd2-ac9d-f56e436232ee kind="ControllerRevision" propagationPolicy=Background
I0529 01:10:59.808393       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpath-resizer-0" objectUID=e3d64441-747f-4b43-84b7-14db7a25208a kind="Pod" propagationPolicy=Background
I0529 01:10:59.811404       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.90.209).
I0529 01:10:59.824818       1 namespace_controller.go:185] Namespace has been deleted volume-9697
I0529 01:10:59.855893       1 pvc_protection_controller.go:291] PVC provisioning-2052/pvc-dt65w is unused
I0529 01:10:59.862787       1 pv_controller.go:638] volume "local-g5n84" is released and reclaim policy "Retain" will be executed
I0529 01:10:59.865477       1 pv_controller.go:864] volume "local-g5n84" entered phase "Released"
I0529 01:10:59.962138       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpath-snapshotter-kzjb7" objectUID=753f0dc0-b33f-49a5-9015-cbe1eeda6e65 kind="EndpointSlice" virtual=false
I0529 01:10:59.966337       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpath-snapshotter-kzjb7" objectUID=753f0dc0-b33f-49a5-9015-cbe1eeda6e65 kind="EndpointSlice" propagationPolicy=Background
I0529 01:11:00.018857       1 pv_controller_base.go:504] deletion of claim "provisioning-2052/pvc-dt65w" was already processed
I0529 01:11:00.093660       1 event.go:291] "Event occurred" object="volume-expand-5590/csi-hostpathls784" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-volume-expand-5590\" or manually created by system administrator"
I0529 01:11:00.134266       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpath-snapshotter-567c74fd99" objectUID=39c993aa-53f6-473f-8c9f-af220a983167 kind="ControllerRevision" virtual=false
I0529 01:11:00.134490       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-1961-4336/csi-hostpath-snapshotter
I0529 01:11:00.134543       1 garbagecollector.go:471] "Processing object" object="volume-expand-1961-4336/csi-hostpath-snapshotter-0" objectUID=56d97087-3ace-409a-b8ab-96e96d0f6063 kind="Pod" virtual=false
I0529 01:11:00.135906       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpath-snapshotter-567c74fd99" objectUID=39c993aa-53f6-473f-8c9f-af220a983167 kind="ControllerRevision" propagationPolicy=Background
I0529 01:11:00.136168       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1961-4336/csi-hostpath-snapshotter-0" objectUID=56d97087-3ace-409a-b8ab-96e96d0f6063 kind="Pod" propagationPolicy=Background
I0529 01:11:00.462143       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.82.41).
I0529 01:11:00.842760       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-northeast-2a/vol-0461f508f0acfa4d5
I0529 01:11:00.910515       1 pv_controller.go:1652] volume "pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044" provisioned for claim "topology-4290/pvc-9qk8f"
I0529 01:11:00.910717       1 event.go:291] "Event occurred" object="topology-4290/pvc-9qk8f" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044 using kubernetes.io/aws-ebs"
I0529 01:11:00.914466       1 pv_controller.go:864] volume "pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044" entered phase "Bound"
I0529 01:11:00.914494       1 pv_controller.go:967] volume "pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044" bound to claim "topology-4290/pvc-9qk8f"
I0529 01:11:00.920593       1 pv_controller.go:808] claim "topology-4290/pvc-9qk8f" entered phase "Bound"
I0529 01:11:01.067417       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0529 01:11:01.256598       1 namespace_controller.go:185] Namespace has been deleted pvc-protection-1500
I0529 01:11:01.462856       1 pv_controller.go:864] volume "pvc-9005962a-d7b2-44ae-a9cd-a3b217182c83" entered phase "Bound"
I0529 01:11:01.462890       1 pv_controller.go:967] volume "pvc-9005962a-d7b2-44ae-a9cd-a3b217182c83" bound to claim "volume-expand-5590/csi-hostpathls784"
I0529 01:11:01.478259       1 pv_controller.go:808] claim "volume-expand-5590/csi-hostpathls784" entered phase "Bound"
I0529 01:11:01.861519       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0461f508f0acfa4d5") from node "ip-172-20-33-144.ap-northeast-2.compute.internal"
I0529 01:11:01.948798       1 aws.go:2014] Assigned mount device bo -> volume vol-0461f508f0acfa4d5
E0529 01:11:02.092631       1 tokens_controller.go:262] error synchronizing serviceaccount subpath-37/default: secrets "default-token-t5xqp" is forbidden: unable to create new content in namespace subpath-37 because it is being terminated
I0529 01:11:02.321883       1 aws.go:2427] AttachVolume volume="vol-0461f508f0acfa4d5" instance="i-0a4e2805a6c116cdf" request returned {
  AttachTime: 2021-05-29 01:11:02.309 +0000 UTC,
  Device: "/dev/xvdbo",
  InstanceId: "i-0a4e2805a6c116cdf",
  State: "attaching",
  VolumeId: "vol-0461f508f0acfa4d5"
}
I0529 01:11:02.606442       1 pvc_protection_controller.go:291] PVC volume-7499/pvc-cb5q7 is unused
I0529 01:11:02.615440       1 pv_controller.go:638] volume "nfs-n5m44" is released and reclaim policy "Retain" will be executed
I0529 01:11:02.618068       1 pv_controller.go:864] volume "nfs-n5m44" entered phase "Released"
I0529 01:11:02.771735       1 pv_controller_base.go:504] deletion of claim "volume-7499/pvc-cb5q7" was already processed
I0529 01:11:02.809714       1 garbagecollector.go:471] "Processing object" object="mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8b" objectUID=4403b589-f1c7-45eb-9c6b-7d9537586adc kind="ReplicaSet" virtual=false
I0529 01:11:02.809977       1 deployment_controller.go:581] Deployment mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72 has been deleted
I0529 01:11:02.811675       1 garbagecollector.go:580] "Deleting object" object="mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8b" objectUID=4403b589-f1c7-45eb-9c6b-7d9537586adc kind="ReplicaSet" propagationPolicy=Background
I0529 01:11:02.814252       1 garbagecollector.go:471] "Processing object" object="mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8bl7gsw" objectUID=7b54194b-8c07-4a88-92a1-74f02d47a50c kind="Pod" virtual=false
I0529 01:11:02.816871       1 garbagecollector.go:580] "Deleting object" object="mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8bl7gsw" objectUID=7b54194b-8c07-4a88-92a1-74f02d47a50c kind="Pod" propagationPolicy=Background
I0529 01:11:02.822069       1 garbagecollector.go:471] "Processing object" object="mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8bl7gsw" objectUID=0bcc57e4-d330-434e-b3b2-53a891b6a6c6 kind="CiliumEndpoint" virtual=false
I0529 01:11:02.826161       1 garbagecollector.go:580] "Deleting object" object="mounted-volume-expand-3933/deployment-39e19c20-a354-47a9-aaaa-87b4f66b4f72-85895cbb8bl7gsw" objectUID=0bcc57e4-d330-434e-b3b2-53a891b6a6c6 kind="CiliumEndpoint" propagationPolicy=Background
I0529 01:11:02.909014       1 namespace_controller.go:185] Namespace has been deleted volume-expand-3599
I0529 01:11:02.932704       1 namespace_controller.go:185] Namespace has been deleted provisioning-9428
I0529 01:11:03.298303       1 pvc_protection_controller.go:291] PVC mounted-volume-expand-3933/pvc-n8rss is unused
I0529 01:11:03.305789       1 pv_controller.go:638] volume "pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840" is released and reclaim policy "Delete" will be executed
I0529 01:11:03.308516       1 pv_controller.go:864] volume "pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840" entered phase "Released"
I0529 01:11:03.310100       1 pv_controller.go:1326] isVolumeReleased[pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840]: volume is released
I0529 01:11:03.502396       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-northeast-2a/vol-0315911e9bd70dfc0: error deleting EBS volume "vol-0315911e9bd70dfc0" since volume is currently attached to "i-0e1599e52cb362162"
E0529 01:11:03.502442       1 goroutinemap.go:150] Operation for "delete-pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840[50c3ae0b-2611-442e-9048-8a60d7b01f63]" failed. No retries permitted until 2021-05-29 01:11:04.002430076 +0000 UTC m=+996.168570487 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-0315911e9bd70dfc0\" since volume is currently attached to \"i-0e1599e52cb362162\""
I0529 01:11:03.502614       1 event.go:291] "Event occurred" object="pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0315911e9bd70dfc0\" since volume is currently attached to \"i-0e1599e52cb362162\""
I0529 01:11:03.667016       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
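The goroutinemap lines above show the volume delete backing off with a doubling delay: durationBeforeRetry is 500ms here and 1s at the next failure further down. A sketch of the same doubling retry using apimachinery's wait package; deleteVolume is a hypothetical stand-in for the cloud call that fails while the volume is still attached:

package e2esketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// retryVolumeDelete retries an operation with the doubling delay visible
// in the goroutinemap lines above (500ms, then 1s, then 2s, ...).
func retryVolumeDelete(deleteVolume func() error) error {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // durationBeforeRetry on the first failure
		Factor:   2,                      // doubles on each subsequent failure, as logged
		Steps:    5,
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := deleteVolume(); err != nil {
			return false, nil // not done yet; wait out the next backoff step
		}
		return true, nil
	})
}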
I0529 01:11:03.886811       1 garbagecollector.go:471] "Processing object" object="services-4561/externalsvc-j6ddd" objectUID=5177302b-233b-45e4-9c70-e5ed890cc350 kind="Pod" virtual=false
I0529 01:11:03.887190       1 garbagecollector.go:471] "Processing object" object="services-4561/externalsvc-nnqh6" objectUID=0df975a1-7f90-42c4-a636-cfdd14041970 kind="Pod" virtual=false
I0529 01:11:03.888844       1 garbagecollector.go:580] "Deleting object" object="services-4561/externalsvc-j6ddd" objectUID=5177302b-233b-45e4-9c70-e5ed890cc350 kind="Pod" propagationPolicy=Background
I0529 01:11:03.889083       1 garbagecollector.go:580] "Deleting object" object="services-4561/externalsvc-nnqh6" objectUID=0df975a1-7f90-42c4-a636-cfdd14041970 kind="Pod" propagationPolicy=Background
I0529 01:11:03.893056       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).
I0529 01:11:03.901349       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).
E0529 01:11:04.017423       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-8538/default: secrets "default-token-vf595" is forbidden: unable to create new content in namespace csi-mock-volumes-8538 because it is being terminated
I0529 01:11:04.063438       1 namespace_controller.go:185] Namespace has been deleted replicaset-2372
I0529 01:11:04.452504       1 aws.go:2037] Releasing in-process attachment entry: bo -> volume vol-0461f508f0acfa4d5
I0529 01:11:04.452555       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume "pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0461f508f0acfa4d5") from node "ip-172-20-33-144.ap-northeast-2.compute.internal"
I0529 01:11:04.452802       1 event.go:291] "Event occurred" object="topology-4290/pod-6958614a-59d4-4947-ab8c-6df1fb91b25e" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044\" "
I0529 01:11:04.673755       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0529 01:11:04.901435       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).
I0529 01:11:05.300029       1 garbagecollector.go:471] "Processing object" object="kubectl-4494/httpd" objectUID=162b0077-728e-4038-a434-6a7e73c69b22 kind="CiliumEndpoint" virtual=false
I0529 01:11:05.304757       1 garbagecollector.go:580] "Deleting object" object="kubectl-4494/httpd" objectUID=162b0077-728e-4038-a434-6a7e73c69b22 kind="CiliumEndpoint" propagationPolicy=Background
I0529 01:11:06.185216       1 stateful_set_control.go:489] StatefulSet statefulset-2730/ss2 terminating Pod ss2-2 for scale down
I0529 01:11:06.190161       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0529 01:11:06.190443       1 event.go:291] "Event occurred" object="statefulset-2730/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-2 in StatefulSet ss2 successful"
I0529 01:11:06.324097       1 pv_controller.go:915] claim "volume-1364/pvc-njgn5" bound to volume "local-5clnr"
I0529 01:11:06.326304       1 pv_controller.go:1326] isVolumeReleased[pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840]: volume is released
I0529 01:11:06.332820       1 pv_controller.go:864] volume "local-5clnr" entered phase "Bound"
I0529 01:11:06.332865       1 pv_controller.go:967] volume "local-5clnr" bound to claim "volume-1364/pvc-njgn5"
I0529 01:11:06.338769       1 pv_controller.go:808] claim "volume-1364/pvc-njgn5" entered phase "Bound"
I0529 01:11:06.430851       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-northeast-2a/vol-0315911e9bd70dfc0: error deleting EBS volume "vol-0315911e9bd70dfc0" since volume is currently attached to "i-0e1599e52cb362162"
E0529 01:11:06.430910       1 goroutinemap.go:150] Operation for "delete-pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840[50c3ae0b-2611-442e-9048-8a60d7b01f63]" failed. No retries permitted until 2021-05-29 01:11:07.430892767 +0000 UTC m=+999.597033184 (durationBeforeRetry 1s). Error: "error deleting EBS volume \"vol-0315911e9bd70dfc0\" since volume is currently attached to \"i-0e1599e52cb362162\""
I0529 01:11:06.431065       1 event.go:291] "Event occurred" object="pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0315911e9bd70dfc0\" since volume is currently attached to \"i-0e1599e52cb362162\""
I0529 01:11:06.843141       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0529 01:11:07.124497       1 namespace_controller.go:185] Namespace has been deleted subpath-37
I0529 01:11:07.720418       1 garbagecollector.go:471] "Processing object" object="container-runtime-7032/image-pull-test023244f9-3ff8-42ce-a0f1-d554ce10588a" objectUID=f4597b22-dc36-448b-9918-53c7ea296cb8 kind="CiliumEndpoint" virtual=false
I0529 01:11:08.069505       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.203.108).
I0529 01:11:08.321037       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6181/test-rolling-update-with-lb-5b74d4d4b5" need=3 creating=3
I0529 01:11:08.321170       1 event.go:291] "Event occurred" object="deployment-6181/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-5b74d4d4b5 to 3"
I0529 01:11:08.336818       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6181/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0529 01:11:08.337735       1 event.go:291] "Event occurred" object="deployment-6181/test-rolling-update-with-lb-5b74d4d4b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5b74d4d4b5-vdfc9"
I0529 01:11:08.347362       1 event.go:291] "Event occurred" object="deployment-6181/test-rolling-update-with-lb-5b74d4d4b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5b74d4d4b5-8c2tb"
I0529 01:11:08.360615       1 event.go:291] "Event occurred" object="deployment-6181/test-rolling-update-with-lb-5b74d4d4b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5b74d4d4b5-mfshb"
I0529 01:11:08.440291       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0529 01:11:08.447736       1 stateful_set_control.go:489] StatefulSet statefulset-2730/ss2 terminating Pod ss2-1 for scale down
I0529 01:11:08.448213       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0529 01:11:08.451699       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
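The "Operation cannot be fulfilled ... the object has been modified" error above is an optimistic-concurrency conflict; the usual client-side remedy is to re-read the object and retry the change. A sketch using client-go's retry helper (the scale update is an illustrative mutation, not what this e2e test performs):

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// scaleWithRetry re-reads the Deployment and re-applies the change on each
// resourceVersion conflict, the condition behind the log line above.
func scaleWithRetry(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = &replicas
		_, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
		return err
	})
}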
I0529 01:11:08.452223       1 event.go:291] "Event occurred" object="statefulset-2730/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-1 in StatefulSet ss2 successful"
I0529 01:11:08.467220       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.239.223).
I0529 01:11:08.541161       1 garbagecollector.go:471] "Processing object" object="webhook-6501/e2e-test-webhook-sq4ts" objectUID=a2eab6a6-6b59-445d-a7bf-3b48b71735df kind="EndpointSlice" virtual=false
I0529 01:11:08.545028       1 garbagecollector.go:580] "Deleting object" object="webhook-6501/e2e-test-webhook-sq4ts" objectUID=a2eab6a6-6b59-445d-a7bf-3b48b71735df kind="EndpointSlice" propagationPolicy=Background
I0529 01:11:08.717546       1 garbagecollector.go:471] "Processing object" object="webhook-6501/sample-webhook-deployment-6bd9446d55" objectUID=b1f73379-a8c3-49e5-80da-d3918b7279fb kind="ReplicaSet" virtual=false
I0529 01:11:08.717794       1 deployment_controller.go:581] Deployment webhook-6501/sample-webhook-deployment has been deleted
I0529 01:11:08.726008       1 garbagecollector.go:580] "Deleting object" object="webhook-6501/sample-webhook-deployment-6bd9446d55" objectUID=b1f73379-a8c3-49e5-80da-d3918b7279fb kind="ReplicaSet" propagationPolicy=Background
I0529 01:11:08.731408       1 garbagecollector.go:471] "Processing object" object="webhook-6501/sample-webhook-deployment-6bd9446d55-lzh2x" objectUID=1def3359-cc7c-48cb-9da2-4e4ffa99e690 kind="Pod" virtual=false
I0529 01:11:08.732921       1 garbagecollector.go:580] "Deleting object" object="webhook-6501/sample-webhook-deployment-6bd9446d55-lzh2x" objectUID=1def3359-cc7c-48cb-9da2-4e4ffa99e690 kind="Pod" propagationPolicy=Background
I0529 01:11:08.746356       1 garbagecollector.go:471] "Processing object" object="webhook-6501/sample-webhook-deployment-6bd9446d55-lzh2x" objectUID=bcad93f7-2c9e-4a4c-9a6a-17cd7cb13294 kind="CiliumEndpoint" virtual=false
I0529 01:11:08.749150       1 garbagecollector.go:580] "Deleting object" object="webhook-6501/sample-webhook-deployment-6bd9446d55-lzh2x" objectUID=bcad93f7-2c9e-4a4c-9a6a-17cd7cb13294 kind="CiliumEndpoint" propagationPolicy=Background
I0529 01:11:08.867186       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.82.41).
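The scale-down above removes ss2-2 first and then ss2-1 (ss2-0 follows later in the log): with the default OrderedReady pod management policy, a StatefulSet terminates pods one at a time in reverse ordinal order. A spec fragment showing where that policy lives; this is a sketch and the rest of the StatefulSet spec is omitted:

package e2esketch

import (
	appsv1 "k8s.io/api/apps/v1"
)

// orderedSpec shows the policy under which the controller scales down
// ss2-2, then ss2-1, then ss2-0, as logged: OrderedReady (the default)
// removes pods in reverse ordinal order, one at a time.
func orderedSpec() appsv1.StatefulSetSpec {
	return appsv1.StatefulSetSpec{
		PodManagementPolicy: appsv1.OrderedReadyPodManagement,
	}
}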
I0529 01:11:09.006114       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0315911e9bd70dfc0") on node "ip-172-20-58-248.ap-northeast-2.compute.internal"
I0529 01:11:09.008450       1 operation_generator.go:1409] Verified volume is safe to detach for volume "pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0315911e9bd70dfc0") on node "ip-172-20-58-248.ap-northeast-2.compute.internal"
E0529 01:11:09.038690       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0529 01:11:09.100514       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8538
E0529 01:11:09.187347       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2052/default: secrets "default-token-k4x6s" is forbidden: unable to create new content in namespace provisioning-2052 because it is being terminated
I0529 01:11:09.196511       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).
I0529 01:11:09.390426       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).
I0529 01:11:09.396997       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).
I0529 01:11:09.458447       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
E0529 01:11:09.524211       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0529 01:11:09.873599       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.82.41).
E0529 01:11:10.024351       1 pv_controller.go:1437] error finding provisioning plugin for claim volumemode-5264/pvc-fm8tp: storageclass.storage.k8s.io "volumemode-5264" not found
I0529 01:11:10.024616       1 event.go:291] "Event occurred" object="volumemode-5264/pvc-fm8tp" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volumemode-5264\" not found"
I0529 01:11:10.068542       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.231.222).
I0529 01:11:10.079720       1 namespace_controller.go:185] Namespace has been deleted projected-6664
I0529 01:11:10.086172       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-6326/pod-f5f0c1de-d54b-4127-9217-99bb28ed0c5d uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-cpbkx pvc- persistent-local-volumes-test-6326  5bc10d25-cf22-4f08-9eaa-753ebb978a0c 36696 0 2021-05-29 01:10:52 +0000 UTC 2021-05-29 01:11:10 +0000 UTC 0xc00381f738 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-29 01:10:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2021-05-29 01:10:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{}}},"f:spec":{"f:volumeName":{}},"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvx4n6p,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-6326,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
I0529 01:11:10.086258       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-6326/pvc-cpbkx because it is still being used
I0529 01:11:10.186784       1 pv_controller.go:864] volume "local-4tbkq" entered phase "Available"
I0529 01:11:10.298042       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-8538-3830/csi-mockplugin-8469b6dfb7" objectUID=a2981b76-7756-40cd-b3ae-938e9cd7347c kind="ControllerRevision" virtual=false
I0529 01:11:10.298290       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-8538-3830/csi-mockplugin
I0529 01:11:10.298333       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-8538-3830/csi-mockplugin-0" objectUID=b946dee7-76d8-43f3-924f-3e5c6cbc2fa0 kind="Pod" virtual=false
I0529 01:11:10.299753       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-8538-3830/csi-mockplugin-8469b6dfb7" objectUID=a2981b76-7756-40cd-b3ae-938e9cd7347c kind="ControllerRevision" propagationPolicy=Background
object=\"csi-mock-volumes-8538-3830/csi-mockplugin-8469b6dfb7\" objectUID=a2981b76-7756-40cd-b3ae-938e9cd7347c kind=\"ControllerRevision\" propagationPolicy=Background\nI0529 01:11:10.300035       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8538-3830/csi-mockplugin-0\" objectUID=b946dee7-76d8-43f3-924f-3e5c6cbc2fa0 kind=\"Pod\" propagationPolicy=Background\nE0529 01:11:10.416820       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0529 01:11:10.475912       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.90.209).\nI0529 01:11:10.502996       1 garbagecollector.go:471] \"Processing object\" object=\"container-probe-3049/liveness-f1defa6c-8112-44e0-b968-1c921b3ee6ba\" objectUID=293b7cf5-3395-4006-aeb9-89437c396b25 kind=\"CiliumEndpoint\" virtual=false\nI0529 01:11:10.509718       1 garbagecollector.go:580] \"Deleting object\" object=\"container-probe-3049/liveness-f1defa6c-8112-44e0-b968-1c921b3ee6ba\" objectUID=293b7cf5-3395-4006-aeb9-89437c396b25 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0529 01:11:10.865840       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).\nI0529 01:11:10.915776       1 stateful_set_control.go:523] StatefulSet statefulset-3959/ss2 terminating Pod ss2-1 for update\nI0529 01:11:10.919623       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:10.920347       1 event.go:291] \"Event occurred\" object=\"statefulset-3959/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-1 in StatefulSet ss2 successful\"\nI0529 01:11:11.065219       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).\nI0529 01:11:11.070759       1 utils.go:413] couldn't find ipfamilies for headless service: services-4561/externalsvc. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.168.56).\nI0529 01:11:11.076681       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5590-693/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
I0529 01:11:11.251371       1 garbagecollector.go:471] "Processing object" object="services-4561/externalsvc-t7vc4" objectUID=b8def343-940e-4da8-9861-b5bcdc3db712 kind="EndpointSlice" virtual=false
I0529 01:11:11.256090       1 garbagecollector.go:580] "Deleting object" object="services-4561/externalsvc-t7vc4" objectUID=b8def343-940e-4da8-9861-b5bcdc3db712 kind="EndpointSlice" propagationPolicy=Background
E0529 01:11:11.273552       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0529 01:11:11.429029       1 garbagecollector.go:471] "Processing object" object="services-4561/clusterip-service-bl6gm" objectUID=c3a4e060-343d-4b0c-b6aa-a34cc84a8f95 kind="EndpointSlice" virtual=false
I0529 01:11:11.429534       1 garbagecollector.go:471] "Processing object" object="services-4561/clusterip-service-4tfp2" objectUID=75954cfe-90a6-4dd1-b91b-1b7c281a5aeb kind="EndpointSlice" virtual=false
I0529 01:11:11.431288       1 garbagecollector.go:580] "Deleting object" object="services-4561/clusterip-service-bl6gm" objectUID=c3a4e060-343d-4b0c-b6aa-a34cc84a8f95 kind="EndpointSlice" propagationPolicy=Background
I0529 01:11:11.433117       1 garbagecollector.go:580] "Deleting object" object="services-4561/clusterip-service-4tfp2" objectUID=75954cfe-90a6-4dd1-b91b-1b7c281a5aeb kind="EndpointSlice" propagationPolicy=Background
I0529 01:11:12.470139       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:12.595733       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-6326/pod-f5f0c1de-d54b-4127-9217-99bb28ed0c5d uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-cpbkx pvc- persistent-local-volumes-test-6326  5bc10d25-cf22-4f08-9eaa-753ebb978a0c 36696 0 2021-05-29 01:10:52 +0000 UTC 2021-05-29 01:11:10 +0000 UTC 0xc00381f738 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-29 01:10:52 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-29 01:10:52 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvx4n6p,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-6326,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0529 01:11:12.595872       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-6326/pvc-cpbkx because it is still being used\nI0529 01:11:12.598574       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-6326/pod-06251082-2ec6-40af-ab29-f78975faa2c8 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-cpbkx pvc- persistent-local-volumes-test-6326  5bc10d25-cf22-4f08-9eaa-753ebb978a0c 36696 0 2021-05-29 01:10:52 +0000 UTC 2021-05-29 01:11:10 +0000 UTC 0xc00381f738 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-29 01:10:52 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-29 01:10:52 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvx4n6p,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-6326,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0529 01:11:12.598644       1 pvc_protection_controller.go:181] Keeping PVC 
persistent-local-volumes-test-6326/pvc-cpbkx because it is still being used\nI0529 01:11:12.778230       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-552/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:12.867118       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:13.067666       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:13.091582       1 stateful_set_control.go:489] StatefulSet statefulset-2730/ss2 terminating Pod ss2-0 for scale down\nI0529 01:11:13.091943       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:13.108117       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:13.108303       1 event.go:291] \"Event occurred\" object=\"statefulset-2730/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-0 in StatefulSet ss2 successful\"\nE0529 01:11:13.236850       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-6501/default: secrets \"default-token-62xzj\" is forbidden: unable to create new content in namespace webhook-6501 because it is being terminated\nI0529 01:11:13.297105       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-552/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:13.297327       1 event.go:291] \"Event occurred\" object=\"statefulset-552/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0529 01:11:13.315317       1 namespace_controller.go:185] Namespace has been deleted mounted-volume-expand-3933\nE0529 01:11:13.402209       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-6501-markers/default: secrets \"default-token-zv54z\" is forbidden: unable to create new content in namespace webhook-6501-markers because it is being terminated\nI0529 01:11:13.591501       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-6326/pod-06251082-2ec6-40af-ab29-f78975faa2c8 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-cpbkx pvc- persistent-local-volumes-test-6326  5bc10d25-cf22-4f08-9eaa-753ebb978a0c 36696 0 2021-05-29 01:10:52 +0000 UTC 2021-05-29 01:11:10 +0000 UTC 0xc00381f738 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-29 01:10:52 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-29 01:10:52 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvx4n6p,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-6326,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0529 01:11:13.591582       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-6326/pvc-cpbkx because it is still being used\nI0529 01:11:13.792942       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-552/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:13.892923       1 event.go:291] \"Event occurred\" object=\"volume-2880/aws9zzpr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0529 01:11:14.125491       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:14.266239       1 namespace_controller.go:185] Namespace has been deleted provisioning-2052\nI0529 01:11:14.441779       1 aws.go:2291] Waiting for volume \"vol-0315911e9bd70dfc0\" state: actual=detaching, desired=detached\nI0529 01:11:14.442094       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:14.590444       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-6326/pod-06251082-2ec6-40af-ab29-f78975faa2c8 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-cpbkx pvc- persistent-local-volumes-test-6326  5bc10d25-cf22-4f08-9eaa-753ebb978a0c 36696 0 2021-05-29 01:10:52 +0000 UTC 2021-05-29 01:11:10 +0000 UTC 0xc00381f738 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-29 01:10:52 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-29 01:10:52 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvx4n6p,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-6326,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0529 01:11:14.590535       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-6326/pvc-cpbkx because it is still being used\nI0529 01:11:14.597302       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-6326/pvc-cpbkx is unused\nI0529 01:11:14.602601       1 pv_controller.go:638] volume \"local-pvx4n6p\" is released and reclaim policy \"Retain\" will be executed\nI0529 01:11:14.605035       1 pv_controller.go:864] volume \"local-pvx4n6p\" entered phase \"Released\"\nI0529 01:11:14.609008       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-6326/pvc-cpbkx\" was already processed\nI0529 01:11:14.996426       1 controller.go:368] Ensuring load balancer for service deployment-6181/test-rolling-update-with-lb\nI0529 01:11:14.996458       1 controller.go:853] Adding finalizer to service deployment-6181/test-rolling-update-with-lb\nI0529 01:11:14.996665       1 utils.go:413] couldn't find ipfamilies for headless service: deployment-6181/test-rolling-update-with-lb. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.103.10).\nI0529 01:11:14.998275       1 event.go:291] \"Event occurred\" object=\"deployment-6181/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Normal\" reason=\"EnsuringLoadBalancer\" message=\"Ensuring load balancer\"\nI0529 01:11:15.032415       1 utils.go:413] couldn't find ipfamilies for headless service: deployment-6181/test-rolling-update-with-lb. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.103.10).\nI0529 01:11:15.033362       1 aws.go:3893] EnsureLoadBalancer(e2e-d54a8cb310-f3fa8.test-cncf-aws.k8s.io, deployment-6181, test-rolling-update-with-lb, ap-northeast-2, , [{ TCP <nil> 80 {0 80 } 31592}], map[])\nI0529 01:11:15.504875       1 pvc_protection_controller.go:291] PVC volume-3109/pvc-pk2nk is unused\nI0529 01:11:15.511147       1 pv_controller.go:638] volume \"nfs-pmj6c\" is released and reclaim policy \"Retain\" will be executed\nI0529 01:11:15.513455       1 pv_controller.go:864] volume \"nfs-pmj6c\" entered phase \"Released\"\nI0529 01:11:15.671667       1 pv_controller_base.go:504] deletion of claim \"volume-3109/pvc-pk2nk\" was already processed\nI0529 01:11:15.731801       1 aws.go:3114] Existing security group ingress: sg-02f6fb02840deecf7 []\nI0529 01:11:15.731870       1 aws.go:3145] Adding security group ingress: sg-02f6fb02840deecf7 [{\n  FromPort: 80,\n  IpProtocol: \"tcp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 80\n} {\n  FromPort: 3,\n  IpProtocol: \"icmp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 4\n}]\nI0529 01:11:15.898578       1 aws_loadbalancer.go:972] Creating load balancer for deployment-6181/test-rolling-update-with-lb with name: a4c3b3e0188184bc3846440631c5a65d\nI0529 01:11:16.009426       1 utils.go:413] couldn't find ipfamilies for headless service: deployment-6181/test-rolling-update-with-lb. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.103.10).\nI0529 01:11:16.011781       1 namespace_controller.go:185] Namespace has been deleted volume-expand-1961-4336\nI0529 01:11:16.382621       1 aws_loadbalancer.go:1175] Updating load-balancer attributes for \"a4c3b3e0188184bc3846440631c5a65d\"\nE0529 01:11:16.412518       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0529 01:11:16.512645       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-05-29 01:10:38 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdch\",\n  InstanceId: \"i-0e1599e52cb362162\",\n  State: \"detaching\",\n  VolumeId: \"vol-0315911e9bd70dfc0\"\n}\nI0529 01:11:16.512692       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0315911e9bd70dfc0\") on node \"ip-172-20-58-248.ap-northeast-2.compute.internal\" \nI0529 01:11:16.593041       1 aws.go:4512] Adding rule for traffic from the load balancer (sg-02f6fb02840deecf7) to instances (sg-03114e0cc8cc5fd4e)\nI0529 01:11:16.652795       1 aws.go:3189] Existing security group ingress: sg-03114e0cc8cc5fd4e [{\n  FromPort: 30000,\n  IpProtocol: \"tcp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 32767\n} {\n  IpProtocol: \"-1\",\n  UserIdGroupPairs: [{\n      GroupId: \"sg-03114e0cc8cc5fd4e\",\n      UserId: \"768319786644\"\n    },{\n      GroupId: \"sg-0bc30172f69ae85f9\",\n      UserId: \"768319786644\"\n    }]\n} {\n  FromPort: 22,\n  IpProtocol: \"tcp\",\n  IpRanges: [{\n      CidrIp: \"34.69.7.130/32\"\n    }],\n  ToPort: 22\n} {\n  FromPort: 30000,\n  IpProtocol: \"udp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 32767\n}]\nI0529 01:11:16.652899       1 aws.go:3086] Comparing sg-02f6fb02840deecf7 to sg-03114e0cc8cc5fd4e\nI0529 01:11:16.652906       1 aws.go:3086] Comparing sg-02f6fb02840deecf7 to sg-0bc30172f69ae85f9\nI0529 01:11:16.652912       1 aws.go:3217] Adding security group ingress: sg-03114e0cc8cc5fd4e [{\n  IpProtocol: \"-1\",\n  UserIdGroupPairs: [{\n      GroupId: \"sg-02f6fb02840deecf7\"\n    }]\n}]\nI0529 01:11:16.735742       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-tfz7k\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0ca935930550a37df\") on node \"ip-172-20-52-235.ap-northeast-2.compute.internal\" \nI0529 01:11:16.739895       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"aws-tfz7k\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0ca935930550a37df\") on node \"ip-172-20-52-235.ap-northeast-2.compute.internal\" \nI0529 01:11:16.784155       1 garbagecollector.go:471] \"Processing object\" object=\"services-4561/execpodptpkp\" objectUID=cce5990e-885e-4f6c-ab0d-b399521a0309 kind=\"CiliumEndpoint\" virtual=false\nI0529 01:11:16.791162       1 garbagecollector.go:580] \"Deleting object\" object=\"services-4561/execpodptpkp\" objectUID=cce5990e-885e-4f6c-ab0d-b399521a0309 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0529 01:11:16.939566       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list 
*v1.PartialObjectMetadata: the server could not find the requested resource\nI0529 01:11:16.939629       1 aws_loadbalancer.go:1423] Instances added to load-balancer a4c3b3e0188184bc3846440631c5a65d\nI0529 01:11:16.939645       1 aws.go:4278] Loadbalancer a4c3b3e0188184bc3846440631c5a65d (deployment-6181/test-rolling-update-with-lb) has DNS name a4c3b3e0188184bc3846440631c5a65d-1598135803.ap-northeast-2.elb.amazonaws.com\nI0529 01:11:16.939688       1 controller.go:894] Patching status for service deployment-6181/test-rolling-update-with-lb\nI0529 01:11:16.940132       1 event.go:291] \"Event occurred\" object=\"deployment-6181/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Normal\" reason=\"EnsuredLoadBalancer\" message=\"Ensured load balancer\"\nI0529 01:11:16.958530       1 utils.go:413] couldn't find ipfamilies for headless service: deployment-6181/test-rolling-update-with-lb. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.103.10).\nE0529 01:11:17.154469       1 tokens_controller.go:262] error synchronizing serviceaccount services-4561/default: secrets \"default-token-ckfjl\" is forbidden: unable to create new content in namespace services-4561 because it is being terminated\nI0529 01:11:17.192862       1 event.go:291] \"Event occurred\" object=\"statefulset-552/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"RecreatingFailedPod\" message=\"StatefulSet statefulset-552/ss is recreating failed Pod ss-0\"\nI0529 01:11:17.196822       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-552/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:17.208774       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-552/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:17.210953       1 event.go:291] \"Event occurred\" object=\"statefulset-552/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0529 01:11:17.222303       1 event.go:291] \"Event occurred\" object=\"statefulset-552/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0529 01:11:17.222428       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-552/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:17.295923       1 namespace_controller.go:185] Namespace has been deleted kubectl-4494\nE0529 01:11:17.300979       1 tokens_controller.go:262] error synchronizing serviceaccount svcaccounts-2899/default: secrets \"default-token-cpcp7\" is forbidden: unable to create new content in namespace svcaccounts-2899 because it is being terminated\nI0529 01:11:17.408623       1 event.go:291] \"Event occurred\" object=\"provisioning-1312/awsrhr8d\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0529 01:11:17.592208       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:17.724647       1 namespace_controller.go:185] Namespace has been deleted downward-api-6766\nI0529 01:11:17.799821       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:17.828730       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-2730/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:18.215606       1 namespace_controller.go:185] Namespace has been deleted container-runtime-7032\nI0529 01:11:18.351004       1 garbagecollector.go:471] \"Processing object\" object=\"container-probe-8802/busybox-82d9a9a1-3e33-42f7-a5ee-2f0b51f2ed78\" objectUID=d2f603a0-18fa-4177-a1d6-8bc706f80ee2 kind=\"CiliumEndpoint\" virtual=false\nI0529 01:11:18.353470       1 garbagecollector.go:580] \"Deleting object\" object=\"container-probe-8802/busybox-82d9a9a1-3e33-42f7-a5ee-2f0b51f2ed78\" objectUID=d2f603a0-18fa-4177-a1d6-8bc706f80ee2 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0529 01:11:18.365894       1 namespace_controller.go:185] Namespace has been deleted webhook-6501\nI0529 01:11:18.492874       1 namespace_controller.go:185] Namespace has been deleted webhook-6501-markers\nI0529 01:11:19.456822       1 event.go:291] \"Event occurred\" object=\"kubectl-8373/httpd-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set httpd-deployment-86bff9b6d7 to 1\"\nI0529 01:11:19.457019       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"kubectl-8373/httpd-deployment-86bff9b6d7\" need=1 creating=1\nI0529 01:11:19.463821       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kubectl-8373/httpd-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"httpd-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0529 01:11:19.467394       1 event.go:291] \"Event occurred\" object=\"kubectl-8373/httpd-deployment-86bff9b6d7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: 
httpd-deployment-86bff9b6d7-qkrds\"\nI0529 01:11:19.564204       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-northeast-2a/vol-029fd32f449f4aaad\nI0529 01:11:19.626527       1 pv_controller.go:1652] volume \"pvc-d1af65f1-aae8-43fc-a29d-5c2a05d5c842\" provisioned for claim \"volume-2880/aws9zzpr\"\nI0529 01:11:19.626698       1 event.go:291] \"Event occurred\" object=\"volume-2880/aws9zzpr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-d1af65f1-aae8-43fc-a29d-5c2a05d5c842 using kubernetes.io/aws-ebs\"\nI0529 01:11:19.633109       1 pv_controller.go:864] volume \"pvc-d1af65f1-aae8-43fc-a29d-5c2a05d5c842\" entered phase \"Bound\"\nI0529 01:11:19.633137       1 pv_controller.go:967] volume \"pvc-d1af65f1-aae8-43fc-a29d-5c2a05d5c842\" bound to claim \"volume-2880/aws9zzpr\"\nI0529 01:11:19.644676       1 pv_controller.go:808] claim \"volume-2880/aws9zzpr\" entered phase \"Bound\"\nI0529 01:11:20.245326       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-d1af65f1-aae8-43fc-a29d-5c2a05d5c842\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-029fd32f449f4aaad\") from node \"ip-172-20-52-235.ap-northeast-2.compute.internal\" \nI0529 01:11:20.286399       1 aws.go:2014] Assigned mount device bi -> volume vol-029fd32f449f4aaad\nI0529 01:11:20.349739       1 namespace_controller.go:185] Namespace has been deleted volume-2503\nI0529 01:11:20.663738       1 aws.go:2427] AttachVolume volume=\"vol-029fd32f449f4aaad\" instance=\"i-01b30dd3a1104aa2c\" request returned {\n  AttachTime: 2021-05-29 01:11:20.65 +0000 UTC,\n  Device: \"/dev/xvdbi\",\n  InstanceId: \"i-01b30dd3a1104aa2c\",\n  State: \"attaching\",\n  VolumeId: \"vol-029fd32f449f4aaad\"\n}\nI0529 01:11:20.769317       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8538-3830\nI0529 01:11:20.947459       1 namespace_controller.go:185] Namespace has been deleted container-probe-3049\nI0529 01:11:21.066930       1 pvc_protection_controller.go:291] PVC topology-4290/pvc-9qk8f is unused\nI0529 01:11:21.077335       1 pv_controller.go:638] volume \"pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044\" is released and reclaim policy \"Delete\" will be executed\nI0529 01:11:21.080612       1 pv_controller.go:864] volume \"pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044\" entered phase \"Released\"\nI0529 01:11:21.081983       1 pv_controller.go:1326] isVolumeReleased[pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044]: volume is released\nI0529 01:11:21.246719       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-northeast-2a/vol-0461f508f0acfa4d5: error deleting EBS volume \"vol-0461f508f0acfa4d5\" since volume is currently attached to \"i-0a4e2805a6c116cdf\"\nE0529 01:11:21.246771       1 goroutinemap.go:150] Operation for \"delete-pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044[daa6b534-8351-4306-b95a-87e4c486eace]\" failed. No retries permitted until 2021-05-29 01:11:21.74675378 +0000 UTC m=+1013.912894197 (durationBeforeRetry 500ms). 
Error: \"error deleting EBS volume \\\"vol-0461f508f0acfa4d5\\\" since volume is currently attached to \\\"i-0a4e2805a6c116cdf\\\"\"\nI0529 01:11:21.247005       1 event.go:291] \"Event occurred\" object=\"pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-0461f508f0acfa4d5\\\" since volume is currently attached to \\\"i-0a4e2805a6c116cdf\\\"\"\nI0529 01:11:21.294461       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-8373/httpd-deployment-86bff9b6d7\" objectUID=5d46bb3d-9eb9-482f-b5d3-fab5a07b82b9 kind=\"ReplicaSet\" virtual=false\nI0529 01:11:21.294667       1 deployment_controller.go:581] Deployment kubectl-8373/httpd-deployment has been deleted\nI0529 01:11:21.298431       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-8373/httpd-deployment-86bff9b6d7\" objectUID=5d46bb3d-9eb9-482f-b5d3-fab5a07b82b9 kind=\"ReplicaSet\" propagationPolicy=Background\nI0529 01:11:21.305596       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-8373/httpd-deployment-86bff9b6d7-qkrds\" objectUID=fc4da361-4273-45f9-bbc8-52e658b35854 kind=\"Pod\" virtual=false\nI0529 01:11:21.307962       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-8373/httpd-deployment-86bff9b6d7-qkrds\" objectUID=fc4da361-4273-45f9-bbc8-52e658b35854 kind=\"Pod\" propagationPolicy=Background\nI0529 01:11:21.324388       1 pv_controller.go:915] claim \"volumemode-5264/pvc-fm8tp\" bound to volume \"local-4tbkq\"\nE0529 01:11:21.329165       1 tokens_controller.go:262] error synchronizing serviceaccount volume-7499/default: secrets \"default-token-bl66j\" is forbidden: unable to create new content in namespace volume-7499 because it is being terminated\nI0529 01:11:21.331803       1 pv_controller.go:1326] isVolumeReleased[pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840]: volume is released\nI0529 01:11:21.332609       1 pv_controller.go:864] volume \"local-4tbkq\" entered phase \"Bound\"\nI0529 01:11:21.332630       1 pv_controller.go:967] volume \"local-4tbkq\" bound to claim \"volumemode-5264/pvc-fm8tp\"\nI0529 01:11:21.340492       1 pv_controller.go:808] claim \"volumemode-5264/pvc-fm8tp\" entered phase \"Bound\"\nI0529 01:11:21.501301       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-northeast-2a/vol-0315911e9bd70dfc0\nI0529 01:11:21.501339       1 pv_controller.go:1421] volume \"pvc-ce34a81e-ac6e-46d4-b0f8-2ffffd5f0840\" deleted\nI0529 01:11:21.510615       1 pv_controller_base.go:504] deletion of claim \"mounted-volume-expand-3933/pvc-n8rss\" was already processed\nI0529 01:11:21.791705       1 event.go:291] \"Event occurred\" object=\"statefulset-552/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"RecreatingFailedPod\" message=\"StatefulSet statefulset-552/ss is recreating failed Pod ss-0\"\nI0529 01:11:21.797131       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-552/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:21.799766       1 event.go:291] \"Event occurred\" object=\"statefulset-552/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0529 01:11:21.806665       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-552/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:21.808002       1 event.go:291] \"Event occurred\" object=\"statefulset-552/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0529 01:11:21.970756       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-552/test-pod\" objectUID=d82b7128-8d43-4627-988b-8943816c75bc kind=\"CiliumEndpoint\" virtual=false\nI0529 01:11:21.980571       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-552/test-pod\" objectUID=d82b7128-8d43-4627-988b-8943816c75bc kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0529 01:11:22.181294       1 aws.go:2291] Waiting for volume \"vol-0ca935930550a37df\" state: actual=detaching, desired=detached\nI0529 01:11:22.277485       1 namespace_controller.go:185] Namespace has been deleted services-4561\nI0529 01:11:22.283961       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-6326\nI0529 01:11:22.356978       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-2899\nI0529 01:11:22.765593       1 aws.go:2037] Releasing in-process attachment entry: bi -> volume vol-029fd32f449f4aaad\nI0529 01:11:22.765647       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-d1af65f1-aae8-43fc-a29d-5c2a05d5c842\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-029fd32f449f4aaad\") from node \"ip-172-20-52-235.ap-northeast-2.compute.internal\" \nI0529 01:11:22.765923       1 event.go:291] \"Event occurred\" object=\"volume-2880/aws-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-d1af65f1-aae8-43fc-a29d-5c2a05d5c842\\\" \"\nI0529 01:11:23.068856       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-northeast-2a/vol-02a55b098b963dd21\nI0529 01:11:23.114923       1 pv_controller.go:1652] volume \"pvc-e3fdfa69-a0f5-494c-ac23-c7631bb7655b\" provisioned for claim \"provisioning-1312/awsrhr8d\"\nI0529 01:11:23.115101       1 event.go:291] \"Event occurred\" object=\"provisioning-1312/awsrhr8d\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-e3fdfa69-a0f5-494c-ac23-c7631bb7655b using kubernetes.io/aws-ebs\"\nI0529 01:11:23.118872       1 pv_controller.go:864] volume \"pvc-e3fdfa69-a0f5-494c-ac23-c7631bb7655b\" entered phase \"Bound\"\nI0529 01:11:23.118924       1 pv_controller.go:967] volume \"pvc-e3fdfa69-a0f5-494c-ac23-c7631bb7655b\" bound to claim \"provisioning-1312/awsrhr8d\"\nI0529 01:11:23.124094       1 pv_controller.go:808] claim \"provisioning-1312/awsrhr8d\" entered phase \"Bound\"\nI0529 01:11:23.392104       1 event.go:291] \"Event occurred\" object=\"statefulset-552/ss\" 
kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"RecreatingFailedPod\" message=\"StatefulSet statefulset-552/ss is recreating failed Pod ss-0\"\nI0529 01:11:23.396809       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-552/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:23.405254       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-552/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:23.405730       1 event.go:291] \"Event occurred\" object=\"statefulset-552/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0529 01:11:23.411661       1 event.go:291] \"Event occurred\" object=\"statefulset-552/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nE0529 01:11:23.414196       1 stateful_set.go:392] error syncing StatefulSet statefulset-552/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.\nI0529 01:11:23.415690       1 event.go:291] \"Event occurred\" object=\"statefulset-552/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.\"\nI0529 01:11:23.417596       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-552/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nE0529 01:11:23.591907       1 tokens_controller.go:262] error synchronizing serviceaccount services-5766/default: secrets \"default-token-7bmwt\" is forbidden: unable to create new content in namespace services-5766 because it is being terminated\nE0529 01:11:23.702135       1 tokens_controller.go:262] error synchronizing serviceaccount container-probe-8802/default: secrets \"default-token-lsb62\" is forbidden: unable to create new content in namespace container-probe-8802 because it is being terminated\nI0529 01:11:23.759804       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-e3fdfa69-a0f5-494c-ac23-c7631bb7655b\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-02a55b098b963dd21\") from node \"ip-172-20-58-248.ap-northeast-2.compute.internal\" \nI0529 01:11:23.836464       1 aws.go:2014] Assigned mount device bw -> volume vol-02a55b098b963dd21\nI0529 01:11:23.850180       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:23.859696       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:23.870210       1 utils.go:424] couldn't find ipfamilies for headless service: statefulset-3959/test likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0529 01:11:23.870777       1 event.go:291] \"Event occurred\" object=\"statefulset-3959/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nE0529 01:11:23.902998       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-3184/default: secrets \"default-token-k9zch\" is forbidden: unable to create new content in namespace secrets-3184 because it is being terminated\nI0529 01:11:24.000334       1 deployment_controller.go:581] Deployment webhook-4175/sample-webhook-deployment has been deleted\nI0529 01:11:24.187560       1 aws.go:2427] AttachVolume volume=\"vol-02a55b098b963dd21\" instance=\"i-0e1599e52cb362162\" request returned {\n  AttachTime: 2021-05-29 01:11:24.177 +0000 UTC,\n  Device: \"/dev/xvdbw\",\n  InstanceId: \"i-0e1599e52cb362162\",\n  State: \"attaching\",\n  VolumeId: \"vol-02a55b098b963dd21\"\n}\nI0529 01:11:24.250347       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-05-29 01:10:52 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdcb\",\n  InstanceId: \"i-01b30dd3a1104aa2c\",\n  State: \"detaching\",\n  VolumeId: \"vol-0ca935930550a37df\"\n}\nI0529 01:11:24.250400       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"aws-tfz7k\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0ca935930550a37df\") on node \"ip-172-20-52-235.ap-northeast-2.compute.internal\" \nI0529 01:11:24.260454       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-tfz7k\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0ca935930550a37df\") from node \"ip-172-20-52-235.ap-northeast-2.compute.internal\" \nI0529 01:11:24.327728       1 aws.go:2014] Assigned mount device bj -> volume vol-0ca935930550a37df\nE0529 01:11:24.363812       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0529 01:11:24.471183       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0461f508f0acfa4d5\") on node \"ip-172-20-33-144.ap-northeast-2.compute.internal\" \nI0529 01:11:24.473555       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-5d48d6eb-b103-496e-9dd9-43cb1dddb044\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0461f508f0acfa4d5\") on node \"ip-172-20-33-144.ap-northeast-2.compute.internal\" \nI0529 01:11:24.706306       1 aws.go:2427] AttachVolume volume=\"vol-0ca935930550a37df\" 
instance=\"i-01b30dd3a1104aa2c\" request returned {\n  AttachTime: 2021-05-29 01:11:24.695 +0000 UTC,\n  Device: \"/dev/xvdbj\",\n  InstanceId: \"i-01b30dd3a1104aa2c\",\n  State: \"attaching\",\n  VolumeId: \"vol-0ca935930550a37df\"\n}\nE0529 01:11:24.934979       1 tokens_controller.go:262] error synchronizing serviceaccount multi-az-4533/default: secrets \"default-token-ccqd9\" is forbidden: unable to create new content in namespace multi-az-4533 because it is being terminated\nI0529 01:11:26.090291       1 namespace_controller.go:185] Namespace has been deleted clientset-2269\nI0529 01:11:26.311533       1 aws.go:2037] Releasing in-process attachment entry: bw -> volume vol-02a55b098b963dd21\nI0529 01:11:26.311585       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-e3fdfa69-a0f5-494c-ac23-c7631bb7655b\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-02a55b098b963dd21\") from node \"ip-172-20-58-248.ap-northeast-2.compute.internal\" \nI0529 01:11:26.311883       1 event.go:291] \"Event occurred\" object=\"provisioning-1312/pod-subpath-test-dynamicpv-czch\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-e3fdfa69-a0f5-494c-ac23-c7631bb7655b\\\" \"\nI0529 01:11:26.398957       1 namespace_controller.go:185] Namespace has been deleted volume-7499\n==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-ip-172-20-36-113.ap-northeast-2.compute.internal ====\n==== START logs for container kube-scheduler of pod kube-system/kube-scheduler-ip-172-20-36-113.ap-northeast-2.compute.internal ====\nI0529 00:54:27.849540       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0529 00:54:27.850013       1 flags.go:59] FLAG: --address=\"0.0.0.0\"\nI0529 00:54:27.850026       1 flags.go:59] FLAG: --algorithm-provider=\"\"\nI0529 00:54:27.850030       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0529 00:54:27.850035       1 flags.go:59] FLAG: --authentication-kubeconfig=\"\"\nI0529 00:54:27.850039       1 flags.go:59] FLAG: --authentication-skip-lookup=\"false\"\nI0529 00:54:27.850046       1 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl=\"10s\"\nI0529 00:54:27.850051       1 flags.go:59] FLAG: --authentication-tolerate-lookup-failure=\"true\"\nI0529 00:54:27.850058       1 flags.go:59] FLAG: --authorization-always-allow-paths=\"[/healthz]\"\nI0529 00:54:27.850066       1 flags.go:59] FLAG: --authorization-kubeconfig=\"\"\nI0529 00:54:27.850070       1 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl=\"10s\"\nI0529 00:54:27.850074       1 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl=\"10s\"\nI0529 00:54:27.850079       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0529 00:54:27.850085       1 flags.go:59] FLAG: --cert-dir=\"\"\nI0529 00:54:27.850089       1 flags.go:59] FLAG: --client-ca-file=\"\"\nI0529 00:54:27.850093       1 flags.go:59] FLAG: --config=\"/var/lib/kube-scheduler/config.yaml\"\nI0529 00:54:27.850098       1 flags.go:59] FLAG: --contention-profiling=\"true\"\nI0529 00:54:27.850103       1 flags.go:59] FLAG: --experimental-logging-sanitization=\"false\"\nI0529 00:54:27.850107       1 flags.go:59] FLAG: --feature-gates=\"\"\nI0529 00:54:27.850114       1 flags.go:59] FLAG: --hard-pod-affinity-symmetric-weight=\"1\"\nI0529 00:54:27.850121       1 flags.go:59] FLAG: --help=\"false\"\nI0529 00:54:27.850125       1 flags.go:59] FLAG: 
--http2-max-streams-per-connection=\"0\"\nI0529 00:54:27.850131       1 flags.go:59] FLAG: --kube-api-burst=\"100\"\nI0529 00:54:27.850136       1 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0529 00:54:27.850141       1 flags.go:59] FLAG: --kube-api-qps=\"50\"\nI0529 00:54:27.850147       1 flags.go:59] FLAG: --kubeconfig=\"\"\nI0529 00:54:27.850151       1 flags.go:59] FLAG: --leader-elect=\"true\"\nI0529 00:54:27.850155       1 flags.go:59] FLAG: --leader-elect-lease-duration=\"15s\"\nI0529 00:54:27.850160       1 flags.go:59] FLAG: --leader-elect-renew-deadline=\"10s\"\nI0529 00:54:27.850164       1 flags.go:59] FLAG: --leader-elect-resource-lock=\"leases\"\nI0529 00:54:27.850169       1 flags.go:59] FLAG: --leader-elect-resource-name=\"kube-scheduler\"\nI0529 00:54:27.850173       1 flags.go:59] FLAG: --leader-elect-resource-namespace=\"kube-system\"\nI0529 00:54:27.850178       1 flags.go:59] FLAG: --leader-elect-retry-period=\"2s\"\nI0529 00:54:27.850182       1 flags.go:59] FLAG: --lock-object-name=\"kube-scheduler\"\nI0529 00:54:27.850186       1 flags.go:59] FLAG: --lock-object-namespace=\"kube-system\"\nI0529 00:54:27.850190       1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI0529 00:54:27.850198       1 flags.go:59] FLAG: --log-dir=\"\"\nI0529 00:54:27.850202       1 flags.go:59] FLAG: --log-file=\"/var/log/kube-scheduler.log\"\nI0529 00:54:27.850207       1 flags.go:59] FLAG: --log-file-max-size=\"1800\"\nI0529 00:54:27.850212       1 flags.go:59] FLAG: --log-flush-frequency=\"5s\"\nI0529 00:54:27.850216       1 flags.go:59] FLAG: --logging-format=\"text\"\nI0529 00:54:27.850220       1 flags.go:59] FLAG: --logtostderr=\"false\"\nI0529 00:54:27.850225       1 flags.go:59] FLAG: --master=\"\"\nI0529 00:54:27.850229       1 flags.go:59] FLAG: --one-output=\"false\"\nI0529 00:54:27.850234       1 flags.go:59] FLAG: --permit-port-sharing=\"false\"\nI0529 00:54:27.850239       1 flags.go:59] FLAG: --policy-config-file=\"\"\nI0529 00:54:27.850243       1 flags.go:59] FLAG: --policy-configmap=\"\"\nI0529 00:54:27.850249       1 flags.go:59] FLAG: --policy-configmap-namespace=\"kube-system\"\nI0529 00:54:27.850254       1 flags.go:59] FLAG: --port=\"10251\"\nI0529 00:54:27.850259       1 flags.go:59] FLAG: --profiling=\"true\"\nI0529 00:54:27.850263       1 flags.go:59] FLAG: --requestheader-allowed-names=\"[]\"\nI0529 00:54:27.850270       1 flags.go:59] FLAG: --requestheader-client-ca-file=\"\"\nI0529 00:54:27.850275       1 flags.go:59] FLAG: --requestheader-extra-headers-prefix=\"[x-remote-extra-]\"\nI0529 00:54:27.850281       1 flags.go:59] FLAG: --requestheader-group-headers=\"[x-remote-group]\"\nI0529 00:54:27.850289       1 flags.go:59] FLAG: --requestheader-username-headers=\"[x-remote-user]\"\nI0529 00:54:27.850294       1 flags.go:59] FLAG: --scheduler-name=\"default-scheduler\"\nI0529 00:54:27.850299       1 flags.go:59] FLAG: --secure-port=\"10259\"\nI0529 00:54:27.850303       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI0529 00:54:27.850307       1 flags.go:59] FLAG: --skip-headers=\"false\"\nI0529 00:54:27.850312       1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI0529 00:54:27.850316       1 flags.go:59] FLAG: --stderrthreshold=\"2\"\nI0529 00:54:27.850320       1 flags.go:59] FLAG: --tls-cert-file=\"\"\nI0529 00:54:27.850325       1 flags.go:59] FLAG: --tls-cipher-suites=\"[]\"\nI0529 00:54:27.850331       1 flags.go:59] FLAG: --tls-min-version=\"\"\nI0529 00:54:27.850335       1 flags.go:59] FLAG: 
--tls-private-key-file=\"\"\nI0529 00:54:27.850339       1 flags.go:59] FLAG: --tls-sni-cert-key=\"[]\"\nI0529 00:54:27.850345       1 flags.go:59] FLAG: --use-legacy-policy-config=\"false\"\nI0529 00:54:27.850350       1 flags.go:59] FLAG: --v=\"2\"\nI0529 00:54:27.850354       1 flags.go:59] FLAG: --version=\"false\"\nI0529 00:54:27.850366       1 flags.go:59] FLAG: --vmodule=\"\"\nI0529 00:54:27.850370       1 flags.go:59] FLAG: --write-config-to=\"\"\nI0529 00:54:28.626062       1 serving.go:331] Generated self-signed cert in-memory\nW0529 00:54:29.394791       1 authentication.go:308] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.\nW0529 00:54:29.394812       1 authentication.go:332] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.\nW0529 00:54:29.394824       1 authorization.go:176] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.\nI0529 00:54:39.405092       1 factory.go:187] Creating scheduler from algorithm provider 'DefaultProvider'\nI0529 00:54:39.414531       1 configfile.go:72] Using component config:\napiVersion: kubescheduler.config.k8s.io/v1beta1\nclientConnection:\n  acceptContentTypes: \"\"\n  burst: 100\n  contentType: application/vnd.kubernetes.protobuf\n  kubeconfig: /var/lib/kube-scheduler/kubeconfig\n  qps: 50\nenableContentionProfiling: true\nenableProfiling: true\nhealthzBindAddress: 0.0.0.0:10251\nkind: KubeSchedulerConfiguration\nleaderElection:\n  leaderElect: true\n  leaseDuration: 15s\n  renewDeadline: 10s\n  resourceLock: leases\n  resourceName: kube-scheduler\n  resourceNamespace: kube-system\n  retryPeriod: 2s\nmetricsBindAddress: 0.0.0.0:10251\nparallelism: 16\npercentageOfNodesToScore: 0\npodInitialBackoffSeconds: 1\npodMaxBackoffSeconds: 10\nprofiles:\n- pluginConfig:\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: DefaultPreemptionArgs\n      minCandidateNodesAbsolute: 100\n      minCandidateNodesPercentage: 10\n    name: DefaultPreemption\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      hardPodAffinityWeight: 1\n      kind: InterPodAffinityArgs\n    name: InterPodAffinity\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeAffinityArgs\n    name: NodeAffinity\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeResourcesFitArgs\n    name: NodeResourcesFit\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeResourcesLeastAllocatedArgs\n      resources:\n      - name: cpu\n        weight: 1\n      - name: memory\n        weight: 1\n    name: NodeResourcesLeastAllocated\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      defaultingType: System\n      kind: PodTopologySpreadArgs\n    name: PodTopologySpread\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      bindTimeoutSeconds: 600\n      kind: VolumeBindingArgs\n    name: VolumeBinding\n  plugins:\n    bind:\n      enabled:\n      - name: DefaultBinder\n        weight: 0\n    filter:\n      enabled:\n      - name: NodeUnschedulable\n        weight: 0\n      - name: NodeName\n        weight: 0\n      - name: TaintToleration\n        weight: 0\n      - name: NodeAffinity\n      
  weight: 0\n      - name: NodePorts\n        weight: 0\n      - name: NodeResourcesFit\n        weight: 0\n      - name: VolumeRestrictions\n        weight: 0\n      - name: EBSLimits\n        weight: 0\n      - name: GCEPDLimits\n        weight: 0\n      - name: NodeVolumeLimits\n        weight: 0\n      - name: AzureDiskLimits\n        weight: 0\n      - name: VolumeBinding\n        weight: 0\n      - name: VolumeZone\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: InterPodAffinity\n        weight: 0\n    permit: {}\n    postBind: {}\n    postFilter:\n      enabled:\n      - name: DefaultPreemption\n        weight: 0\n    preBind:\n      enabled:\n      - name: VolumeBinding\n        weight: 0\n    preFilter:\n      enabled:\n      - name: NodeResourcesFit\n        weight: 0\n      - name: NodePorts\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: InterPodAffinity\n        weight: 0\n      - name: VolumeBinding\n        weight: 0\n    preScore:\n      enabled:\n      - name: InterPodAffinity\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: TaintToleration\n        weight: 0\n    queueSort:\n      enabled:\n      - name: PrioritySort\n        weight: 0\n    reserve:\n      enabled:\n      - name: VolumeBinding\n        weight: 0\n    score:\n      enabled:\n      - name: NodeResourcesBalancedAllocation\n        weight: 1\n      - name: ImageLocality\n        weight: 1\n      - name: InterPodAffinity\n        weight: 1\n      - name: NodeResourcesLeastAllocated\n        weight: 1\n      - name: NodeAffinity\n        weight: 1\n      - name: NodePreferAvoidPods\n        weight: 10000\n      - name: PodTopologySpread\n        weight: 2\n      - name: TaintToleration\n        weight: 1\n  schedulerName: default-scheduler\n\nI0529 00:54:39.414549       1 server.go:138] Starting Kubernetes Scheduler version v1.20.7\nW0529 00:54:39.417625       1 authorization.go:47] Authorization is disabled\nW0529 00:54:39.417636       1 authentication.go:40] Authentication is disabled\nI0529 00:54:39.417648       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251\nI0529 00:54:39.420055       1 tlsconfig.go:200] loaded serving cert [\"Generated self signed cert\"]: \"localhost@1622249668\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1622249668\" (2021-05-28 23:54:27 +0000 UTC to 2022-05-28 23:54:27 +0000 UTC (now=2021-05-29 00:54:39.419590709 +0000 UTC))\nI0529 00:54:39.420290       1 named_certificates.go:53] loaded SNI cert [0/\"self-signed loopback\"]: \"apiserver-loopback-client@1622249669\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1622249668\" (2021-05-28 23:54:28 +0000 UTC to 2022-05-28 23:54:28 +0000 UTC (now=2021-05-29 00:54:39.420279278 +0000 UTC))\nI0529 00:54:39.420312       1 secure_serving.go:197] Serving securely on [::]:10259\nI0529 00:54:39.420363       1 tlsconfig.go:240] Starting DynamicServingCertificateController\nI0529 00:54:39.421612       1 reflector.go:219] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134\nI0529 00:54:39.421766       1 reflector.go:219] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134\nI0529 00:54:39.423868       1 reflector.go:219] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134\nI0529 00:54:39.424875       1 reflector.go:219] Starting 
reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134\nI0529 00:54:39.425154       1 reflector.go:219] Starting reflector *v1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134\nI0529 00:54:39.425392       1 reflector.go:219] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134\nI0529 00:54:39.425653       1 reflector.go:219] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134\nI0529 00:54:39.425910       1 reflector.go:219] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134\nI0529 00:54:39.427851       1 reflector.go:219] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134\nI0529 00:54:39.428136       1 reflector.go:219] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134\nI0529 00:54:39.428384       1 reflector.go:219] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134\nI0529 00:54:59.545229       1 trace.go:205] Trace[1216180520]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (29-May-2021 00:54:39.421) (total time: 20123ms):\nTrace[1216180520]: [20.123572708s] [20.123572708s] END\nE0529 00:54:59.545252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0529 00:54:59.545270       1 trace.go:205] Trace[791178011]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (29-May-2021 00:54:39.424) (total time: 20120ms):\nTrace[791178011]: [20.120367015s] [20.120367015s] END\nE0529 00:54:59.545282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0529 00:54:59.545294       1 trace.go:205] Trace[765061429]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (29-May-2021 00:54:39.421) (total time: 20123ms):\nTrace[765061429]: [20.123510716s] [20.123510716s] END\nE0529 00:54:59.545306       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0529 00:54:59.545398       1 trace.go:205] Trace[2124392123]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (29-May-2021 00:54:39.425) (total time: 20120ms):\nTrace[2124392123]: [20.120226954s] [20.120226954s] END\nE0529 00:54:59.545407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0529 00:54:59.545508       1 trace.go:205] Trace[991873170]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (29-May-2021 00:54:39.423) (total time: 20121ms):\nTrace[991873170]: [20.121622754s] [20.121622754s] END\nE0529 00:54:59.545513       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list 
E0529 00:55:03.269429       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0529 00:55:03.269522       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0529 00:55:03.269593       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0529 00:55:03.269657       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0529 00:55:03.269715       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0529 00:55:03.269780       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0529 00:55:03.269859       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0529 00:55:03.269922       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0529 00:55:03.269970       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0529 00:55:03.270020       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0529 00:55:03.271892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
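Here the transport is up but RBAC has not finished reconciling, so system:kube-scheduler is denied list on every informer resource; the errors stop a few seconds later once the apiserver's bootstrap controller has created the default cluster roles and bindings. For reference, the standard bootstrap binding that grants these permissions looks roughly like the following sketch (standard Kubernetes bootstrap policy, not content taken from this dump):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: system:kube-scheduler
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:kube-scheduler
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: User
        name: system:kube-scheduler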
I0529 00:55:06.421322       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler...
I0529 00:55:06.432788       1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler
I0529 00:55:08.079191       1 node_tree.go:65] Added node "ip-172-20-36-113.ap-northeast-2.compute.internal" in group "ap-northeast-2:\x00:ap-northeast-2a" to NodeTree
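The lease named in the two leaderelection.go lines corresponds to the leader-election stanza of the scheduler's component config; a minimal sketch of that stanza in the v1beta1 API, with the namespace and name inferred from the logged lease kube-system/kube-scheduler:

    apiVersion: kubescheduler.config.k8s.io/v1beta1
    kind: KubeSchedulerConfiguration
    leaderElection:
      leaderElect: true
      resourceNamespace: kube-system
      resourceName: kube-scheduler

With a single master (the job was created with --master-count 1) the lease is acquired immediately, and scheduling starts as soon as the first node registers in the NodeTree.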
pod=\"kube-system/coredns-autoscaler-6f594f4c58-6t684\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0529 00:55:21.436138       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-5c4c4\" node=\"ip-172-20-36-113.ap-northeast-2.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI0529 00:55:21.452372       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/dns-controller-5f98b58844-9jznw\" err=\"0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.\"\nI0529 00:55:21.480552       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-6t684\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0529 00:55:21.480675       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/cilium-operator-7fd7d56f47-kz2bx\" err=\"0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.\"\nI0529 00:55:21.507457       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-8f5559c9b-pxvkg\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0529 00:55:21.530843       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/dns-controller-5f98b58844-9jznw\" err=\"0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.\"\nI0529 00:55:21.530967       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/cilium-operator-7fd7d56f47-kz2bx\" err=\"0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.\"\nI0529 00:55:21.531080       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-8f5559c9b-pxvkg\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0529 00:55:38.211005       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-6t684\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0529 00:55:38.211177       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/dns-controller-5f98b58844-9jznw\" err=\"0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.\"\nI0529 00:55:38.211287       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/cilium-operator-7fd7d56f47-kz2bx\" err=\"0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.\"\nI0529 00:55:38.211402       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-8f5559c9b-pxvkg\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0529 00:55:42.444616       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-6t684\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0529 00:55:42.444769       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/dns-controller-5f98b58844-9jznw\" err=\"0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.\"\nI0529 00:55:42.444897       1 factory.go:321] \"Unable to schedule 
I0529 00:55:42.444897       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="kube-system/cilium-operator-7fd7d56f47-kz2bx" err="0/1 nodes are available: 1 node(s) didn't match Pod's node affinity."
I0529 00:55:42.451063       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-8f5559c9b-pxvkg" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0529 00:56:09.887365       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-6f594f4c58-6t684" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0529 00:56:09.887693       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-8f5559c9b-pxvkg" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0529 00:56:09.894977       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/dns-controller-5f98b58844-9jznw" node="ip-172-20-36-113.ap-northeast-2.compute.internal" evaluatedNodes=1 feasibleNodes=1
I0529 00:56:09.905762       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/cilium-operator-7fd7d56f47-kz2bx" node="ip-172-20-36-113.ap-northeast-2.compute.internal" evaluatedNodes=1 feasibleNodes=1
I0529 00:56:09.940116       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/kops-controller-8k5nc" node="ip-172-20-36-113.ap-northeast-2.compute.internal" evaluatedNodes=1 feasibleNodes=1
I0529 00:56:20.451151       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-6f594f4c58-6t684" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0529 00:56:20.451318       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-8f5559c9b-pxvkg" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0529 00:56:30.521736       1 node_tree.go:65] Added node "ip-172-20-33-144.ap-northeast-2.compute.internal" in group "ap-northeast-2:\x00:ap-northeast-2a" to NodeTree
I0529 00:56:30.521956       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-6f594f4c58-6t684" err="0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
I0529 00:56:30.534412       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-8f5559c9b-pxvkg" err="0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
I0529 00:56:30.555581       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/cilium-hjh65" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=2 feasibleNodes=1
I0529 00:56:32.758700       1 node_tree.go:65] Added node "ip-172-20-52-235.ap-northeast-2.compute.internal" in group "ap-northeast-2:\x00:ap-northeast-2a" to NodeTree
I0529 00:56:32.784628       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/cilium-qh277" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=3 feasibleNodes=1
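Conversely, dns-controller and cilium-operator were rejected with "didn't match Pod's node affinity" and then bound to the master at 00:56:09: they are pinned to the control-plane node rather than repelled by it, and most likely could not match until the node's labels settled. Their manifests are not part of this dump, but that kind of pinning is expressed with a required node-affinity term, sketched here with an assumed master-role label key:

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: node-role.kubernetes.io/master
                  operator: Exists

From 00:56:30 on, each worker joining the NodeTree immediately receives its cilium DaemonSet pod (which tolerates node.kubernetes.io/not-ready), while the coredns pods stay pending until a node sheds that taint.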
I0529 00:56:35.896861       1 node_tree.go:65] Added node "ip-172-20-47-14.ap-northeast-2.compute.internal" in group "ap-northeast-2:\x00:ap-northeast-2a" to NodeTree
I0529 00:56:35.930051       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/cilium-hcng5" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=4 feasibleNodes=1
I0529 00:56:36.049731       1 node_tree.go:65] Added node "ip-172-20-58-248.ap-northeast-2.compute.internal" in group "ap-northeast-2:\x00:ap-northeast-2a" to NodeTree
I0529 00:56:36.072524       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/cilium-vhmhc" node="ip-172-20-58-248.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 00:56:41.453082       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-6f594f4c58-6t684" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
I0529 00:56:41.460563       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-8f5559c9b-pxvkg" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
I0529 00:56:51.551228       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/coredns-autoscaler-6f594f4c58-6t684" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 00:56:52.517388       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/coredns-8f5559c9b-pxvkg" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 00:57:09.706183       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/coredns-8f5559c9b-cxld4" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 00:59:46.649529       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-3796/nfs-server" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 00:59:46.953632       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-8421/downwardapi-volume-4b0f2f25-4146-456f-896f-ff5009527dc9" node="ip-172-20-58-248.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 00:59:46.962592       1 scheduler.go:604] "Successfully bound pod to node" pod="container-probe-2586/test-webserver-d67b6065-234f-4be7-9eee-bc70c246583b" node="ip-172-20-58-248.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 00:59:47.430352       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-7721/terminate-cmd-rpac0ba2fa5-d323-4750-aa99-0fb65125e386" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 00:59:47.537136       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-3072/pod-hostip-a65c722a-834e-4327-8dfd-3a9865e5fa07" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 00:59:47.706137       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-9727/pod-secrets-d7df8268-532b-42e6-9af7-18fc8c547433\" 
node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 00:59:48.621305       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-3130/sample-webhook-deployment-6bd9446d55-s7np4\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 00:59:48.975652       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-2571/netserver-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:49.000760       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2931/hostexec-ip-172-20-58-248.ap-northeast-2.compute.internal-q9kqs\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:49.101457       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-2571/netserver-1\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:49.263953       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-2571/netserver-2\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:49.322595       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2507/pod-subpath-test-inlinevolume-q26g\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:49.429693       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-2571/netserver-3\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:49.497242       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"subpath-7648/pod-subpath-test-configmap-wszn\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 00:59:49.557193       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7880/pod-subpath-test-inlinevolume-qsm6\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:50.722162       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1503/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-fkvwk\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:50.948634       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-7936/sample-webhook-deployment-6bd9446d55-wm72g\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 00:59:50.968463       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-5988/pod-0cadc1f0-1ef3-453d-b6b3-e8365ccaa6cb\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 00:59:52.360990       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-7718/test-pod-1\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 00:59:52.371083       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6606/external-provisioner-w6tqb\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 00:59:52.493762       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-7630/pod-configmaps-6e9cb973-2b12-476d-9ec3-51e5d5d40af5\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0529 00:59:52.519859       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-7718/test-pod-2\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 00:59:52.682478       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-7718/test-pod-3\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 00:59:52.828903       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6951-7649/csi-hostpath-attacher-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:53.316143       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6951-7649/csi-hostpathplugin-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:53.465259       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7961-55/csi-hostpath-attacher-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:53.520641       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9190/pvc-volume-tester-writer-t84gz\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 00:59:53.699271       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6951-7649/csi-hostpath-provisioner-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:53.968361       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7961-55/csi-hostpathplugin-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:54.013812       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6951-7649/csi-hostpath-resizer-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:54.296991       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7961-55/csi-hostpath-provisioner-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:54.359556       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6951-7649/csi-hostpath-snapshotter-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:54.634408       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7961-55/csi-hostpath-resizer-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:54.713722       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-642/pod-subpath-test-inlinevolume-kfq8\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:54.835957       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9633-2601/csi-mockplugin-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 00:59:54.971206       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7961-55/csi-hostpath-snapshotter-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:01.250904       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"downward-api-1163/metadata-volume-034b259a-4cf2-4a7d-8f8a-01bda0b14cf2\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:01.336848       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1503/pod-b79c813f-4e4d-4aab-9c44-c79216fae5da\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:01.687984       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1831/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-p5k9c\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:01.807860       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-3267/downward-api-48c88e42-20d3-481b-b542-5150d2322619\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:02.807893       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-5770/pod-7428b5bb-7d24-4ca9-afd7-91aa763b3576\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:03.044712       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-3796/pvc-tester-5d2j9\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:03.521609       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8745/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-vgtgt\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:03.962271       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-5709/sample-webhook-deployment-6bd9446d55-d55lj\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:05.569369       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1140/pod-subpath-test-inlinevolume-4b2b\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:08.040533       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"default/recycler-for-nfs-t8qr9\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:08.143554       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2931/pod-subpath-test-preprovisionedpv-knb5\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:09.358111       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7848-1864/csi-mockplugin-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:09.510312       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7848-1864/csi-mockplugin-attacher-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:09.572924       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-4850/pod-configmaps-229d82e9-f687-43e6-9e76-523b5d2c9eba\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:09.686715       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7848-1864/csi-mockplugin-resizer-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 
01:00:11.516589       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-8341-5773/csi-hostpath-attacher-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:12.002051       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-8341-5773/csi-hostpathplugin-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:12.341702       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1503/pod-7ccb6af1-88d7-4619-b207-3df05c24a0f3\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:12.393223       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-8341-5773/csi-hostpath-provisioner-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:12.460466       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-8629/hostexec-ip-172-20-47-14.ap-northeast-2.compute.internal-fmk28\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:12.680217       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-8341-5773/csi-hostpath-resizer-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:12.950461       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2077/pod-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:13.022109       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-8341-5773/csi-hostpath-snapshotter-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:13.161341       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-2077/pod-1\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:13.453402       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-8341/inline-volume-tester-rq78j\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:13.817423       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-3740/nfs-server\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:14.799171       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9190/pvc-volume-tester-reader-pq2hv\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:15.299048       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-3796/pvc-tester-ng9vj\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:15.438721       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-7936/to-be-attached-pod\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:16.118209       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6606/pod-subpath-test-dynamicpv-tx7d\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:17.204760       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"var-expansion-4373/var-expansion-7433bdbd-2ccf-422c-870c-a49009c9dad2\" 
node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:17.349246       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5766/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-cqxpp\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:17.661550       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-7721/terminate-cmd-rpof62611b11-9c76-4a40-a900-86a73e562b88\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.085264       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-tq7kx\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.094604       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-jh9cg\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.094693       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-jcbvr\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.128402       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-6258r\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.138587       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-r7qmt\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.138808       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-nlbzr\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.138896       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-rptjb\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.138976       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-jd9hj\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.151689       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-wjj7q\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.151780       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-7bw4q\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.151865       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-h8kvb\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.151928       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-qjgdd\" 
node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.162236       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-dwmfl\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.162319       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-rn7z7\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.162373       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-htsz5\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.186583       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-c8nfb\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.186872       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-fkvrl\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.186949       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-xjp28\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.187006       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-vb48m\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.198393       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-29rcj\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.198404       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-tzpkn\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.198525       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-6vkgt\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.198636       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-9qwtt\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.198651       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-ggl6b\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.198710       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-74977\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.203097       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-k8pxv\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0529 01:00:18.213180       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-s4wdp\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.213250       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-jlxk8\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.216667       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-vs84j\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.258913       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-qkzk5\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.305402       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-jcq26\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.405153       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-fhcc8\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.455071       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-9qdjl\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.506027       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-c5mkg\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.558102       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-rprxj\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.608284       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-z5glm\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.662088       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-b4pd2\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.715110       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-bd5p7\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.759489       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-w8wz4\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:18.806749       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-634/cleanup40-82c9c23f-625c-427a-ad30-bce17b88cc6d-fbzrg\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:19.285780       1 scheduler.go:604] \"Successfully bound 
pod to node\" pod=\"nettest-2571/test-container-pod\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:19.941969       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-2637/busybox-3544d9c1-fabf-4cca-b233-6b04484fdcd2\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:20.282640       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"default/recycler-for-nfs-t8qr9\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:21.131851       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-2123/exec-volume-test-dynamicpv-ssfj\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:21.419941       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-1608/pod-submit-remove-142006e6-f303-4ecc-8cc3-06155f614fb8\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:21.505604       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6951/pod-subpath-test-dynamicpv-95t7\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:22.284592       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1313-4190/csi-hostpath-attacher-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:22.534431       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1831/pod-subpath-test-preprovisionedpv-kpht\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:22.777428       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1313-4190/csi-hostpathplugin-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:23.098092       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1313-4190/csi-hostpath-provisioner-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:23.141866       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8745/pod-subpath-test-preprovisionedpv-jvrz\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:23.307122       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2218/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-6q5vn\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:23.355072       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-8629/exec-volume-test-preprovisionedpv-gqpj\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:23.425617       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1313-4190/csi-hostpath-resizer-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:23.763928       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1313-4190/csi-hostpath-snapshotter-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:23.870651       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"security-context-5289/security-context-6fda962d-85e4-42ca-be3b-edefb46e0037\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:24.200983       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7961/pod-subpath-test-dynamicpv-2blg\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:25.247638       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9878/external-provisioner-l8647\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:30.401973       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-7721/terminate-cmd-rpn7a0ad0a8-665f-4186-ab26-9329d9e759b0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:32.220521       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-785/pod-subpath-test-inlinevolume-fjhm\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:32.851267       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1650/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-qdw8x\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:35.629558       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8240/hostexec-ip-172-20-47-14.ap-northeast-2.compute.internal-8rv9m\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:36.703009       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9878/pod-subpath-test-dynamicpv-9fxv\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:37.571073       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5766/pod-subpath-test-preprovisionedpv-s976\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:38.183609       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-3740/pvc-tester-mp7tr\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:38.193547       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7848/pvc-volume-tester-9jnqp\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:39.666949       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-5909/simpletest.deployment-7f7555f8bc-tcmzj\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:39.667209       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-5909/simpletest.deployment-7f7555f8bc-r8ssp\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:39.890214       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7751/external-provisioner-xtqxb\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:40.658776       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1650/pod-bb16b07b-990c-471b-8772-e74b72abae45\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 
01:00:41.688507       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-4940/pod-projected-secrets-aa184f35-f403-4b41-a82b-a3faf0f1bca1\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:43.530402       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"proxy-5292/proxy-service-l6k4n-gqw5k\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:43.611684       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-4980/pod-service-account-78ac996f-75eb-402e-9c69-019c79f0d1be\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:47.638084       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1650/pod-9810b37e-c79c-4c37-89c5-dbeb3a9c85d0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:48.380572       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-2075/termination-message-container0196453d-2e41-43a4-a935-4293f5051a77\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:48.655638       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"volume-1313/hostpath-injector\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity.\"\nI0529 01:00:48.662267       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"volume-1313/hostpath-injector\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity.\"\nI0529 01:00:49.346037       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7751/pod-subpath-test-dynamicpv-t6hp\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:51.529426       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1313/hostpath-injector\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:53.217256       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8240/pod-subpath-test-preprovisionedpv-mrfg\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:54.677937       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-2716/pod-configmaps-3978477b-41b6-410d-949d-dba22e216125\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:55.095357       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5472/external-provisioner-jxcbr\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:00:55.371933       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-163/pod-subpath-test-inlinevolume-fwdq\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:00:57.231857       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1926/pod-subpath-test-dynamicpv-xsjw\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0529 01:00:59.106510       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-1486/pod-projected-secrets-e1da1912-c96e-4d44-a197-e32a9c33d000\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:00.299592       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4236/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-f5xxc\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:00.338848       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7264/hostexec-ip-172-20-58-248.ap-northeast-2.compute.internal-p6j52\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:00.881587       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-5855/downwardapi-volume-3f5de46f-e23a-48f4-99f3-9d35ac2f8614\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:02.224475       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-9894/pod-configmaps-23da9809-9eab-4003-a207-cc9231750f63\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:03.147670       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1080/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-jt7kj\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:03.315241       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-test-379/bin-falseb6ad273b-8218-454d-81c9-df48a7366293\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:03.843939       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7848/pvc-volume-tester-bv62x\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:03.873954       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-1608/pfpod\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:03.973209       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"subpath-3345/pod-subpath-test-downwardapi-7f2r\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:04.698381       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5472/pod-subpath-test-dynamicpv-h2bt\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:06.963664       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9633/pvc-volume-tester-kc4jx\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:07.348210       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7136/hostexec-ip-172-20-47-14.ap-northeast-2.compute.internal-z6j9k\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:07.985277       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1080/pod-a8877733-a937-400c-a539-0aa4bfdf8011\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:09.065982       1 scheduler.go:604] \"Successfully 
bound pod to node\" pod=\"provisioning-7264/pod-subpath-test-preprovisionedpv-nn9h\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:10.155522       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-242pn\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:10.167151       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-nh7q6\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:10.175688       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-96dnk\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:10.175945       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-xjwmp\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:10.176034       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-f4ct5\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:10.176088       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-6fh64\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:10.185289       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-v6dm8\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:10.222191       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-7sv8k\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:10.222430       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-svvgp\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:10.231036       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-vdktv\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:12.024104       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3376-22/csi-hostpath-attacher-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:12.534018       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3376-22/csi-hostpathplugin-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:12.863925       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3376-22/csi-hostpath-provisioner-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:13.189460       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3376-22/csi-hostpath-resizer-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:13.528684       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"ephemeral-3376-22/csi-hostpath-snapshotter-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:13.839094       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3376/inline-volume-tester-864zh\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:14.817784       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-1188/hostexec\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:18.141077       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-5772/pod-e4a8e92a-1e07-491e-8691-b77e369276ce\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:18.440601       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svc-latency-7868/svc-latency-rc-x2sbj\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:20.494215       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-9038/explicit-root-uid\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:20.980890       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3221/netserver-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:21.142176       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3221/netserver-1\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:21.295534       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3221/netserver-2\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:21.462341       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3221/netserver-3\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:22.181501       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-4818/dns-test-3e4c6b20-4521-49c8-9adf-8c143a92d3c4\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:22.406464       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4236/pod-subpath-test-preprovisionedpv-9gkw\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:22.742109       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7136/pod-subpath-test-preprovisionedpv-w8kq\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:23.776031       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-4175/sample-webhook-deployment-6bd9446d55-vrxcz\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:24.405153       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-7663/dns-test-dd21fbaa-6764-4b68-81af-481d2beef423\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:25.054929       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-5003/httpd\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:25.562465       1 scheduler.go:604] \"Successfully 
bound pod to node\" pod=\"configmap-124/pod-configmaps-6f07004b-e4f8-4aad-8299-1ac24254a70a\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:26.553075       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7852-5486/csi-mockplugin-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:26.699313       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7852-5486/csi-mockplugin-attacher-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:26.876033       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7852-5486/csi-mockplugin-resizer-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:28.285119       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-442/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-dnskd\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:29.467775       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-f99pk\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:29.484195       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-9jsn6\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:29.484437       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-9knqq\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:29.544993       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-zqw95\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:29.553557       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-dpmkx\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:30.405200       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8765-4912/csi-hostpath-attacher-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:30.890618       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8765-4912/csi-hostpathplugin-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:31.206599       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8765-4912/csi-hostpath-provisioner-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:31.255468       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-kg5jm\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.264029       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-nqs52\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.273148       1 scheduler.go:604] \"Successfully bound pod to 
node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-kk6gk\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.273228       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-x5fgc\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.301416       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-cvc87\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.321183       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-6b2q9\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.321262       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-q492z\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.334380       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-8vkps\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.334467       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-k7zhq\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.334530       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-d78hd\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.334585       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-df9wp\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.334641       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-zjvmj\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.334898       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-drlx7\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.334965       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-dd94f59b7-4gc8j\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.353666       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-55dnp\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.353740       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-9dnhf\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.353817       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-trlpx\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.353898       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"deployment-983/webserver-deployment-dd94f59b7-md82m\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.353957       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-kzc25\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.380290       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-983/webserver-deployment-795d758f88-x4jvc\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:31.555868       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8765-4912/csi-hostpath-resizer-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:31.738370       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1313/hostpath-client\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:31.895470       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8765-4912/csi-hostpath-snapshotter-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:33.454697       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7852/pvc-volume-tester-2787k\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:34.505436       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7106/hostexec-ip-172-20-58-248.ap-northeast-2.compute.internal-frjg7\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:35.508958       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-1612/pod-e76a56b8-84fe-4c36-b464-e5f00ee8e020\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:35.524146       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4900/external-provisioner-2mdfs\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:35.830363       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-7946/netserver-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:35.993992       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-7946/netserver-1\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:36.156252       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-7946/netserver-2\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:36.337315       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-7946/netserver-3\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:37.566342       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7323/aws-injector\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:37.804566       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-7771/pod-aba25610-53c5-42b5-ac84-c9e72662881d\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:40.635630  
     1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"var-expansion-3419/var-expansion-b4603ba2-4c7f-4257-a2d7-f199e8762be4\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:42.164113       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6850/netserver-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:42.324872       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6850/netserver-1\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:42.486490       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6850/netserver-2\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:42.646703       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6850/netserver-3\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:45.223583       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-3221/test-container-pod\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:45.894439       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-4302/hostexec-ip-172-20-58-248.ap-northeast-2.compute.internal-prxmn\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:46.467608       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-5003/success\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:47.216709       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"events-2505/send-events-4ffca5ce-b231-44ea-bff2-bf1c69d6dab0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:47.437478       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5308/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-558s6\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:49.279291       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-5003/failure-1\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:50.666787       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"mount-propagation-1425/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-7zzbx\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:51.989285       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5308/pod-024c7b34-d012-454b-8d23-5bc073a78adf\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:52.063297       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7106/local-injector\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:52.510324       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-5003/failure-2\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:53.728322       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-4302/local-injector\" 
node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:54.109532       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-442/local-injector\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:54.830479       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5317-3841/csi-hostpath-attacher-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:55.013516       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1508-9260/csi-mockplugin-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:55.294277       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5317-3841/csi-hostpathplugin-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:55.321056       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1508-9260/csi-mockplugin-attacher-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:55.589567       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5317-3841/csi-hostpath-provisioner-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:55.685159       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-4818/dns-test-6affc6b9-6bd1-431a-bd62-4629ea20e8ef\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:01:55.922249       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4276/hostexec-ip-172-20-47-14.ap-northeast-2.compute.internal-gjplk\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:55.965387       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5317-3841/csi-hostpath-resizer-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:56.293515       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5317-3841/csi-hostpath-snapshotter-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:01:56.957685       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5308/pod-d5a469f9-6108-4025-ba20-4ffc21190808\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:02:01.102045       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1508/pvc-volume-tester-zggx9\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:02:02.147747       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-7946/test-container-pod\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:02.313080       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-7946/host-test-container-pod\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:02.339085       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7323/aws-client\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:02.951242       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-6837/nfs-server\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:03.894364       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5317/hostpath-injector\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:02:04.559322       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6850/test-container-pod\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:04.647014       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-4900/pvc-volume-tester-writer-wm6lx\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0529 01:02:04.653547       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-4900/pvc-volume-tester-writer-wm6lx\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0529 01:02:04.719251       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6850/host-test-container-pod\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:05.373879       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8765/pod-subpath-test-dynamicpv-6trj\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:02:05.576038       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1508/inline-volume-rvvhl\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:05.702944       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-247/test-pod-751e8762-3c65-4585-b280-29e3e90ca408\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:07.547382       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4900/pvc-volume-tester-writer-wm6lx\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:08.816867       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-9486/concurrent-1622250120-hhrpm\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:08.841606       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-247/test-pod-751e8762-3c65-4585-b280-29e3e90ca408\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:08.862865       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-6837/pvc-tester-wwd74\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:09.148389       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4276/pod-subpath-test-preprovisionedpv-qxj9\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:02:09.466270       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7587/up-down-1-lgtz8\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:02:09.477310       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7587/up-down-1-czl69\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 
... skipping 189 lines ...
I0529 01:03:42.337967       1 scheduler.go:604] "Successfully bound pod to node" pod="services-7587/verify-service-up-host-exec-pod"
node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:42.352261       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4675/pod-subpath-test-inlinevolume-hc2x\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:42.519212       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8404/service-headless-lvmnw\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:42.536801       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8404/service-headless-wzkl2\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:42.537090       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8404/service-headless-jsrnf\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:43.266024       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-1271-7258/csi-hostpath-attacher-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:43.771436       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-1271-7258/csi-hostpathplugin-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:43.900225       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8728/ss2-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:44.091753       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-1271-7258/csi-hostpath-provisioner-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:44.397080       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-4386/explicit-nonroot-uid\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:44.569353       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-1271-7258/csi-hostpath-resizer-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:44.829445       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-1271-7258/csi-hostpath-snapshotter-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:45.059555       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-663/exec-volume-test-dynamicpv-mn7b\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:45.179235       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-1271/inline-volume-tester-dzgg9\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:47.142080       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6298-4374/csi-mockplugin-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:47.458804       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6298-4374/csi-mockplugin-attacher-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:47.989640       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"persistent-local-volumes-test-8703/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-5tdqj\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:48.811237       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7587/verify-service-up-exec-pod-kqnkc\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:49.175678       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-5558/fail-once-local-b7nl6\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:49.185853       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-5558/fail-once-local-6ntxx\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:49.213697       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8404/service-headless-toggled-zfkpf\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:49.234192       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8404/service-headless-toggled-mxm8n\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:49.234438       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8404/service-headless-toggled-5qq8l\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:49.831022       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9997/pvc-volume-tester-4p88z\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:52.835902       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8703/pod-d2028c3c-d173-4aa0-b30c-908706429307\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:53.347283       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-3085/pod-projected-configmaps-2b7b6567-8422-4441-8c0d-449c77e3c6db\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:53.910246       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-1271/inline-volume-tester2-cnhgw\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:53.959878       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-5558/fail-once-local-94jtj\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:54.124307       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8624/pod-subpath-test-preprovisionedpv-khr4\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:03:54.449613       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-5558/fail-once-local-b9pwl\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:55.961108       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8404/verify-service-up-host-exec-pod\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:56.034407       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8728/ss2-1\" 
node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:56.924547       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-5749/pod-ready\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:03:57.848270       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-5807/pod-ephm-test-projected-gfjc\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:00.657445       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8404/verify-service-up-exec-pod-47k9f\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:00.983126       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-3651/pod-handle-http-request\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:01.532168       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6298/pvc-volume-tester-7dp5l\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:02.089213       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-5950/pod-9f805509-bd8d-47be-a42a-e7fed15b6e96\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:02.509871       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-3746/labelsupdatea416de19-486d-4ca3-a27a-fd6b73fcf282\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:03.503021       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-7320/security-context-c972ba99-bd5c-40f2-bfd4-1945b65c6e4a\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:03.646538       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-3651/pod-with-poststart-exec-hook\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:03.894472       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-6974/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-2g7f6\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:04.791162       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-8176-1146/csi-hostpath-attacher-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:05.001078       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-1305/test-webserver-fd6e4e3a-3a13-40a4-b070-e38e71ccb982\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:05.308025       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-8176-1146/csi-hostpathplugin-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:05.634326       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-8176-1146/csi-hostpath-provisioner-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:05.959147       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-8176-1146/csi-hostpath-resizer-0\" 
node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:06.022803       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6298/inline-volume-5cxt9\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:06.287647       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-8176-1146/csi-hostpath-snapshotter-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:06.434883       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4232/frontend-7659f66489-ln4ft\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:06.447945       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4232/frontend-7659f66489-fg69d\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:06.479170       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4232/frontend-7659f66489-b5vhw\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:06.519676       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-6428/pod-configmaps-e227bc12-563b-41ce-ab63-f09172e0bb85\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:06.818186       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8728/ss2-2\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:07.282145       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4232/agnhost-primary-56857545d9-fnn5n\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:07.576921       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8404/verify-service-down-host-exec-pod\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:08.166394       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4232/agnhost-replica-55fd9c5577-8plqk\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:08.176271       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4232/agnhost-replica-55fd9c5577-jz7m2\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:08.280627       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8333/netserver-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:08.442550       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8333/netserver-1\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:08.607156       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8333/netserver-2\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:08.617356       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-8524/pod-a9d27e73-b19f-4c7f-963a-894ae060b5ae\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:08.771065       1 scheduler.go:604] \"Successfully bound pod to 
node\" pod=\"pod-network-test-8333/netserver-3\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:09.213145       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-4857/forbid-1622250240-mhdt7\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:09.559515       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-8176/pod-0e44095b-9b3f-4052-9f2f-e4813cbbce5a\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:10.271611       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6059/netserver-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:10.424623       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6059/netserver-1\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:10.582399       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6059/netserver-2\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:10.746709       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6059/netserver-3\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:10.878425       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-9933/slow-terminating-unready-pod-7dlkr\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:13.749461       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-2454/aws-client\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:14.357158       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-8176/hostexec-ip-172-20-47-14.ap-northeast-2.compute.internal-ttcrx\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:15.119846       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-7148/annotationupdatec10a6aa5-abda-41af-9066-c9574f9b0f57\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:15.515401       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"subpath-6461/pod-subpath-test-secret-654h\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:15.544323       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-4560/pfpod\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:15.689754       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-487-6016/csi-mockplugin-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:16.019646       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-487-6016/csi-mockplugin-resizer-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:18.151119       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-9933/execpod-lffvn\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:18.189235       1 scheduler.go:604] \"Successfully 
bound pod to node\" pod=\"services-8404/verify-service-down-host-exec-pod\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:22.802279       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-487/pvc-volume-tester-62556\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:23.780688       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-6974/exec-volume-test-preprovisionedpv-6jfh\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:24.206489       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6122/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-sllbm\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:25.266286       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6296/agnhost-primary-2dzpw\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:26.839115       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8404/verify-service-up-host-exec-pod\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:26.941041       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-4317/test-cleanup-controller-xkxxb\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:28.714998       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6122/pod-46c49e57-4de8-42c2-bd72-9cb23fbd7c37\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:29.319210       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8404/verify-service-up-exec-pod-5kd2q\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:29.923055       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-4317/test-cleanup-deployment-685c4f8568-kf4b4\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:30.492353       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6059/test-container-pod\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:30.721792       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8333/test-container-pod\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:30.884256       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8333/host-test-container-pod\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:33.161157       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"tables-9819/pod-1\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:34.098443       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-434/hostexec-ip-172-20-58-248.ap-northeast-2.compute.internal-lkm59\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:34.422150       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"container-lifecycle-hook-2120/pod-handle-http-request\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:34.465895       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-487/pvc-volume-tester-t2jvs\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:34.966308       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8296/netserver-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:35.129645       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8296/netserver-1\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:35.292381       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8296/netserver-2\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:35.455337       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8296/netserver-3\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:36.171897       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8404/verify-service-down-host-exec-pod\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:36.383197       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-3160/termination-message-containercea53d2e-557d-47e3-9941-e9c7d64606bf\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:36.793567       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3824/pod-subpath-test-inlinevolume-8mfs\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:37.083667       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-2120/pod-with-poststart-http-hook\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:37.432962       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-5003/failure-4\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:37.996111       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1696/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-hcrv9\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:40.370239       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-3744/busybox-user-0-97454a86-dc96-4a57-8391-e40b39494884\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:40.867432       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4487/hostexec-ip-172-20-58-248.ap-northeast-2.compute.internal-bf96n\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:41.199771       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-1507/pod-0f603baf-fe1a-4579-b956-6808f497f705\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:42.915994       1 scheduler.go:604] \"Successfully 
bound pod to node\" pod=\"webhook-3662/sample-webhook-deployment-6bd9446d55-5vl5r\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:43.649236       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-4817/nfs-server\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:44.241285       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-3947/security-context-9c241d02-34d9-4663-9e5c-8cc4cbd22c8d\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:44.495183       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1696/pod-17d720e6-f4f5-4c26-8a6f-b917db807565\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:45.309144       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-252/pod-subpath-test-inlinevolume-d7jd\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:50.033353       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2426-1263/csi-mockplugin-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:52.109257       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9967/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-lztlg\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:53.734566       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-434/pod-subpath-test-preprovisionedpv-f5lw\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:53.790997       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2426/pvc-volume-tester-twn7l\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:54.137756       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4487/pod-subpath-test-preprovisionedpv-ld4s\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:54.892505       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-7065/simpletest-rc-to-be-deleted-t679n\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:54.901118       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-7065/simpletest-rc-to-be-deleted-krlf9\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:54.910882       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-7065/simpletest-rc-to-be-deleted-xwfd5\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:54.922864       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-7065/simpletest-rc-to-be-deleted-6zwkz\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:54.923115       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-7065/simpletest-rc-to-be-deleted-fc56l\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:54.928237       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"gc-7065/simpletest-rc-to-be-deleted-xfx96\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:54.928324       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-7065/simpletest-rc-to-be-deleted-4c4tr\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:54.945096       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-7065/simpletest-rc-to-be-deleted-8xpsz\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:54.945217       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-7065/simpletest-rc-to-be-deleted-9dhs2\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:54.949480       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-7065/simpletest-rc-to-be-deleted-k2tb8\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:55.299815       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7455/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-b9bnp\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:57.759441       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7361/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-fz5s2\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:04:57.823899       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-435/pod-011b5bdc-6194-4ca5-8f97-814a3d1598c9\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:58.909144       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-3351/busybox-6132174d-0768-45d7-87d9-be10cff456a6\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:59.073269       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8296/test-container-pod\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:04:59.234564       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8296/host-test-container-pod\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:01.237950       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-7124/rs-vbhvw\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:01.277890       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-7124/rs-ml8k8\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:01.278130       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-7124/rs-xgjm2\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:03.749735       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-377/pod-subpath-test-inlinevolume-sd8l\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:04.537948       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"persistent-local-volumes-test-7361/pod-953d1888-5358-4ee5-b26c-52def8b8b6fe\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:05.976025       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7339/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-4wjfm\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:07.462226       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7186/test-orphan-deployment-dd94f59b7-hwsxz\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:07.954397       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9967/pod-998a1ce5-b5fc-4bba-93dd-28f4d53a2bd0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:08.587182       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7455/pod-subpath-test-preprovisionedpv-rr42\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:08.608227       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-7124/rs-lj6ls\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:09.028333       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-4817/pvc-tester-hsk9m\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:09.319860       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-5404/replace-1622250300-df8wd\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:10.623504       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-3875/pod-993ae388-b44a-4d76-9089-d540a192643b\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:10.745760       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9967/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-x6zfk\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:12.355094       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-7124/rs-kklxw\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:13.920132       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-7324/implicit-nonroot-uid\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:13.960783       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7565/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-75fmx\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:14.550704       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-856/security-context-1cb1d873-6af0-4a24-8e46-7fa76a04737e\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:16.296183       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-8622/pod-qos-class-2f7b9e2c-fbd5-4311-a40a-d926d53f0698\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:16.335399       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6843/httpd\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:17.951048       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8352-2467/csi-mockplugin-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:18.154676       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"init-container-8638/pod-init-10383a3f-6d5f-46fe-9f2c-495af60d1707\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:18.498188       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4575/update-demo-nautilus-pmtcd\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:18.514722       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4575/update-demo-nautilus-z58qd\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:19.514813       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9878-1170/csi-hostpath-attacher-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:20.029982       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9878-1170/csi-hostpathplugin-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:20.352181       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9878-1170/csi-hostpath-provisioner-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:20.678182       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9878-1170/csi-hostpath-resizer-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:21.006065       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9878-1170/csi-hostpath-snapshotter-0\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:21.651513       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-9878/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0529 01:05:21.658504       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-9878/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0529 01:05:21.739458       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-1651/pod-projected-configmaps-5c53c664-685b-4d53-bbc5-994642dd27b8\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:22.315069       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5842/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-whlkb\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:22.388100       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-8109/pod-4b8d5e3a-97fc-4647-a982-84e152d74768\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:23.132481       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"provisioning-7339/pod-subpath-test-preprovisionedpv-cwl2\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:23.675786       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9878/hostpath-injector\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:26.539459       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-2146/netserver-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:26.705307       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-2146/netserver-1\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:26.867882       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-2146/netserver-2\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:27.032617       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-2146/netserver-3\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:27.109315       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-2543/agnhost-primary-h6gkh\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:27.351659       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"hostpath-553/pod-host-path-test\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:27.685512       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3865/pod-subpath-test-dynamicpv-s4kf\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:29.864538       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8352/pvc-volume-tester-nx8dg\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:31.288224       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-9253/e2e-configmap-dns-server-c68889e3-1e6e-4804-a62a-478de3f256b9\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:33.805954       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-3123/server-envvars-a0af5f79-4b43-485c-b036-10f655a2938e\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:33.928536       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-9253/e2e-dns-utils\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:34.378342       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6843/run-test\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:37.003395       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5842/pod-subpath-test-preprovisionedpv-v4bw\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:37.517279       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-8783/exec-volume-test-preprovisionedpv-lnn2\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:38.352907       1 scheduler.go:604] \"Successfully bound pod 
to node\" pod=\"fsgroupchangepolicy-435/pod-f9aebbc8-5b8e-4b1e-af97-5ffac22baf7e\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:38.610143       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-3123/client-envvars-dd5b3f9a-e3c0-485a-926c-1a41581d9b1f\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:39.847455       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-7040/pod-a2ba67cd-5f74-4c87-9264-1a1c3d662745\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:42.659594       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-2146/test-container-pod\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:42.884698       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-7969/security-context-e0855d13-ece1-4623-bc2b-af1e9d805da4\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:43.028620       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9067/hostexec-ip-172-20-47-14.ap-northeast-2.compute.internal-8wkmw\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:43.407920       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8330-4798/csi-hostpath-attacher-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:43.892141       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8330-4798/csi-hostpathplugin-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:44.230772       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8330-4798/csi-hostpath-provisioner-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:44.340436       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-3088/pod-configmaps-612fb063-8efb-4bf4-9895-22d289199e26\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:44.551563       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8330-4798/csi-hostpath-resizer-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:44.871412       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8330-4798/csi-hostpath-snapshotter-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:47.037673       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-8965/pod-projected-secrets-c15860bf-9238-4372-9715-a19f91704116\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:47.173642       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-6068/pod-263593e1-e4f9-4f95-aac7-1b688b8ccdd4\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:48.110771       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2218-6343/csi-mockplugin-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:48.119553       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8330/pod-subpath-test-dynamicpv-nbbg\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:48.292056       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2218-6343/csi-mockplugin-attacher-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:48.626968       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-371/pod-94d3b4b8-9d1d-43bc-8506-0ec319c05598\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:49.410669       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-350/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0529 01:05:49.415738       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-350/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0529 01:05:49.616146       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1634-6964/csi-hostpath-attacher-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:49.870931       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6843/run-test-2\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:50.100164       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1634-6964/csi-hostpathplugin-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:50.422658       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1634-6964/csi-hostpath-provisioner-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:50.745213       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1634-6964/csi-hostpath-resizer-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:51.064084       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1634-6964/csi-hostpath-snapshotter-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:51.354253       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"e2e-privileged-pod-2352/privileged-pod\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:51.676066       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-350/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0529 01:05:51.683884       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-1634/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0529 01:05:51.689752       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-1634/hostpath-injector\" err=\"0/5 nodes are available: 5 pod has unbound immediate 
PersistentVolumeClaims.\"\nI0529 01:05:51.842024       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9952/netserver-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:51.913760       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-4778/metadata-volume-bc361760-fcd7-403a-9895-130e21c962e8\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:52.000044       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9952/netserver-1\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:52.160901       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9952/netserver-2\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:52.324927       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9952/netserver-3\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:52.985701       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-1609/downwardapi-volume-79c21f2f-6e11-451b-93cc-be616ffbef94\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:53.571652       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2189/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-gw84l\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:54.007237       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9067/pod-901fab42-fed1-4c8f-ba47-765f71955b4f\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:54.591731       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9878/hostpath-client\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:54.692496       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1634/hostpath-injector\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:56.142908       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1917/external-provisioner-d52zc\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:05:56.220658       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-350/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0529 01:05:56.226788       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-350/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0529 01:05:58.679819       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-350/terminating-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0529 01:05:59.518864       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"csi-mock-volumes-2218/pvc-volume-tester-c5bxw\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:05:59.650791       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6574/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-99fcr\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:00.028110       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-390/startup-b17aa671-00a6-4c7e-972c-73c69f7c2fe5\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:00.254503       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2189/pod-32e3aa0e-4b21-4fcd-b142-816f1e0790e0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:00.833326       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9067/hostexec-ip-172-20-47-14.ap-northeast-2.compute.internal-g2bgn\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:02.757630       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7892/pod-subpath-test-dynamicpv-tpbj\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:03.212632       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6843/run-test-3\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:03.226748       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-271-7015/csi-hostpath-attacher-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:03.722210       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-271-7015/csi-hostpathplugin-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:04.058137       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-271-7015/csi-hostpath-provisioner-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:04.363759       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2626/hostexec-ip-172-20-58-248.ap-northeast-2.compute.internal-95n6n\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:04.393704       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-271-7015/csi-hostpath-resizer-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:04.749151       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-271-7015/csi-hostpath-snapshotter-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:05.050728       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-271/inline-volume-tester-k5bhj\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:05.064380       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1271/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-7qzsx\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:08.034031       1 scheduler.go:604] 
\"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2189/pod-ea454978-14a8-41f4-bbe2-c2384a58383c\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:09.403292       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-5404/replace-1622250360-2fkbj\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:17.302947       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6699/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-ncbnx\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:17.935364       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9952/test-container-pod\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:18.486206       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-1634/hostpath-client\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0529 01:06:18.562298       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-1634/hostpath-client\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0529 01:06:18.836541       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1917/nfs-server\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:20.698808       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1634/hostpath-client\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:20.937135       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-9636/dns-test-2c5f2058-54b8-42a0-b082-8a448765e58b\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:22.031607       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-9969/backofflimit-pjbjs\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:22.207568       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1917/nfs-injector\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:22.722047       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1271/pod-subpath-test-preprovisionedpv-qx45\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:22.933212       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6574/pod-subpath-test-preprovisionedpv-smgf\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:23.980255       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3950/exec-volume-test-preprovisionedpv-bsq9\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:23.989735       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2626/pod-subpath-test-preprovisionedpv-sfwm\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:24.690872       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6518-3019/csi-hostpath-attacher-0\" 
node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:24.881120       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-9969/backofflimit-ww97b\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:25.182507       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6518-3019/csi-hostpathplugin-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:25.506785       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6518-3019/csi-hostpath-provisioner-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:25.833522       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6518-3019/csi-hostpath-resizer-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:25.980966       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5462/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-xsppq\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:26.179085       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6518-3019/csi-hostpath-snapshotter-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:26.753922       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-3338/sample-webhook-deployment-6bd9446d55-9c4kj\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:27.942001       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-3138/security-context-d182e8ee-41aa-4429-bb5b-16e2a21bdeef\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:28.817470       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-4424/simpletest.rc-6hwtm\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:28.833582       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-4424/simpletest.rc-npwhj\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:28.833817       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-4424/simpletest.rc-xt2q8\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:28.840269       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-4424/simpletest.rc-lhf4j\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:28.840342       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-4424/simpletest.rc-5gvfd\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:28.840396       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-4424/simpletest.rc-8zn9n\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:28.849986       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-4424/simpletest.rc-k8hwz\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:28.863101       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"gc-4424/simpletest.rc-z26qm\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:28.873855       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-4424/simpletest.rc-fs4f5\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:28.873897       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-4424/simpletest.rc-89l9t\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:29.457000       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6518/pod-subpath-test-dynamicpv-d9mx\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:32.348534       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-7862/exceed-active-deadline-4wlsv\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:32.358260       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-7862/exceed-active-deadline-9qgs7\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:33.866724       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2626/pod-subpath-test-preprovisionedpv-sfwm\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:35.041268       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-667/pod-d9cc708d-5b3c-4f5c-b0ea-cfab09b1f550\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:35.748558       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-5955/pod-projected-configmaps-cc3ee429-de96-415a-af16-8f9bea07f289\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:36.374262       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-7923/pod-c71fee7f-78e4-41cf-b79e-527b376df80f\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:36.426998       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-7635/security-context-52620299-123f-4872-b3b6-25ab8e19b619\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:37.164360       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5462/pod-subpath-test-preprovisionedpv-wnsx\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:38.038686       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6699/pod-subpath-test-preprovisionedpv-tcqg\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:38.246495       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8562/externalname-service-j6rhh\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:38.253575       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8562/externalname-service-jpsdx\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:39.173780       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2284/pod-subpath-test-inlinevolume-pxnj\" 
node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:40.054192       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-3837/pod-projected-secrets-a7bd7af1-408c-4dc1-8a90-15c78a6667b7\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:40.193328       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replication-controller-4926/my-hostname-basic-6efb7887-cddb-4c28-acf1-10832069ffce-9bqfl\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:45.359669       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-test-1812/busybox-scheduling-487bc6a9-c725-4695-a2e0-3d5c0d749fd7\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:45.744134       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9884/hostexec-ip-172-20-47-14.ap-northeast-2.compute.internal-x5rc5\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:45.754482       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6518/pod-subpath-test-dynamicpv-d9mx\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:47.175231       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1917/nfs-client\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:47.604304       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8562/execpodcvmdq\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:48.080897       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-8941/nfs-server\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:48.715976       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-9086/pod-logs-websocket-45f5192b-93c7-4282-9b9f-1b16a3cc7739\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:48.815218       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-8392/pod-97aceae0-993c-435a-948d-0b53fc4d275a\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:50.442562       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-4167/pod-0\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:50.628950       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4868/update-demo-nautilus-2kwfx\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:50.636367       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4868/update-demo-nautilus-59fp5\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:51.468919       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-8663/pfpod\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:52.488681       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9884/pod-subpath-test-preprovisionedpv-4mr9\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" 
I0529 01:06:54.290148       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7305-9481/csi-hostpath-attacher-0" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:06:54.795896       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7305-9481/csi-hostpathplugin-0" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:06:55.116883       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7305-9481/csi-hostpath-provisioner-0" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:06:55.445720       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7305-9481/csi-hostpath-resizer-0" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:06:55.756255       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-7206/netserver-0" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:06:55.786818       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7305-9481/csi-hostpath-snapshotter-0" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:06:55.919442       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-7206/netserver-1" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:06:56.081205       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-7206/netserver-2" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:06:56.244360       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-7206/netserver-3" node="ip-172-20-58-248.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:06:56.713214       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-8638-8905/csi-mockplugin-0" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:06:57.031901       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-8638-8905/csi-mockplugin-resizer-0" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:06:57.046679       1 scheduler.go:604] "Successfully bound pod to node" pod="container-probe-6294/startup-cdf72842-441c-4b38-8f79-23d6d5e2d9bc" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:06:57.475080       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-8157/downwardapi-volume-1ceb840c-7c74-4132-8dc1-21904ea8ff5c" node="ip-172-20-58-248.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:06:59.085362       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7305/pod-subpath-test-dynamicpv-q9z6" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:06:59.106791       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-8719/downwardapi-volume-e5eef276-9929-4de9-8871-1aa21de23e8e" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:06:59.236171       1 scheduler.go:604] "Successfully bound pod to node" pod="kubelet-test-4614/busybox-readonly-fsf15c1d47-7e6e-4d40-89bf-8f39d16f73ef" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
pod=\"kubelet-test-4614/busybox-readonly-fsf15c1d47-7e6e-4d40-89bf-8f39d16f73ef\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:06:59.580068       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6084/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-r8hsb\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:06:59.895531       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-3049/liveness-f1defa6c-8112-44e0-b968-1c921b3ee6ba\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:02.244318       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8639/pod-subpath-test-dynamicpv-68nz\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:03.496604       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-9222/sample-webhook-deployment-6bd9446d55-8f2nf\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:03.819415       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8638/pvc-volume-tester-4tzs6\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:04.351252       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6084/pod-bb77e0e6-36c3-4604-bf0e-076ce9ec38b5\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:07.837403       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-6643/simpletest.rc-cpwks\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:07.853729       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-6643/simpletest.rc-knj9m\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:07.853976       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-6643/simpletest.rc-x4zps\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:07.868244       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-6643/simpletest.rc-pkhf4\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:07.872051       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-6643/simpletest.rc-c4h97\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:07.872147       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-6643/simpletest.rc-6k496\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:07.872356       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-6643/simpletest.rc-frtgz\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:07.875959       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-6643/simpletest.rc-drmrd\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:07.876304       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-6643/simpletest.rc-v5jcc\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 
I0529 01:07:07.876385       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-6643/simpletest.rc-mc7j5" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:07:08.958876       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1639/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-fzf6r" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:11.331738       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-3733/hostpath-injector" node="ip-172-20-58-248.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:11.483750       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1623/hostexec-ip-172-20-58-248.ap-northeast-2.compute.internal-hl4fq" node="ip-172-20-58-248.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:13.454718       1 scheduler.go:604] "Successfully bound pod to node" pod="fsgroupchangepolicy-7923/pod-694289db-462c-4559-ab19-487db50c4965" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:07:15.022237       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-4868/update-demo-nautilus-f85sx" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:07:17.370088       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-7955/pod-projected-configmaps-2da89cc6-8e3c-4ceb-b70e-145386cef936" node="ip-172-20-58-248.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:07:18.005368       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-7206/test-container-pod" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:07:18.898785       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-834/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-pps7x" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:19.542929       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-2312/hostpathsymlink-injector" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:21.340927       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="resourcequota-6534/test-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity."
I0529 01:07:21.346193       1 factory.go:321] "Unable to schedule pod; no fit; waiting" pod="resourcequota-6534/test-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity."
I0529 01:07:21.749816       1 scheduler.go:604] "Successfully bound pod to node" pod="e2e-kubelet-etc-hosts-5392/test-pod" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:07:22.329845       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2511/hostexec-ip-172-20-47-14.ap-northeast-2.compute.internal-4cc8s" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:22.604881       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1623/pod-subpath-test-preprovisionedpv-ks58" node="ip-172-20-58-248.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
pod=\"provisioning-1623/pod-subpath-test-preprovisionedpv-ks58\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:23.038138       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1639/pod-subpath-test-preprovisionedpv-tbrz\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:23.468892       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-8941/pvc-tester-db4tt\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:23.723227       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-6534/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity.\"\nI0529 01:07:23.989104       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-9837-7311/csi-hostpath-attacher-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:24.404504       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"e2e-kubelet-etc-hosts-5392/test-host-network-pod\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:24.516426       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-9837-7311/csi-hostpathplugin-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:24.822415       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-9837-7311/csi-hostpath-provisioner-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:25.159630       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-9837-7311/csi-hostpath-resizer-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:25.500034       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-9837-7311/csi-hostpath-snapshotter-0\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:28.236631       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-3644/pod-93534eae-2a47-4977-9ecc-f5d99940d015\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:29.076677       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8380-9478/csi-mockplugin-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:29.236272       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8380-9478/csi-mockplugin-attacher-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:29.656555       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-3733/hostpath-client\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:30.251452       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9965/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-qcsl5\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:31.958723       1 scheduler.go:604] \"Successfully 
bound pod to node\" pod=\"volume-2312/hostpathsymlink-client\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:35.839714       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8380/pvc-volume-tester-x6z5p\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:37.045930       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6602/hostexec-ip-172-20-58-248.ap-northeast-2.compute.internal-dz2kw\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:37.656593       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2511/pod-subpath-test-preprovisionedpv-vsfd\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:38.481186       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-3879/dns-test-3dd4d00a-9696-4074-8951-b944571a7b67\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:38.565621       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-834/exec-volume-test-preprovisionedpv-v464\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:39.042960       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4855/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-w7672\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:42.123301       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-4097/sample-webhook-deployment-6bd9446d55-z8d6p\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:43.624154       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-4005/test-new-deployment-dd94f59b7-2d8s8\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:44.816745       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-720/liveness-633749f5-3d5e-4288-bf30-c93607acce78\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:45.281197       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-8590/image-pull-testee0e70c8-88e6-43c8-b42e-ac3f2a9a785b\" node=\"ip-172-20-52-235.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0529 01:07:47.358805       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5909/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-t2x96\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:51.476399       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1360-1800/csi-mockplugin-0\" node=\"ip-172-20-47-14.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:52.351315       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4855/pod-subpath-test-preprovisionedpv-md9g\" node=\"ip-172-20-33-144.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0529 01:07:52.564341       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6602/pod-subpath-test-preprovisionedpv-v9tm\" node=\"ip-172-20-58-248.ap-northeast-2.compute.internal\" 
I0529 01:07:53.084350       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-9853/pod-configmaps-7b5e443f-cc5e-49ed-b4a7-ea79ca61a367" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:07:54.033798       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5909/pod-subpath-test-preprovisionedpv-cpfr" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:54.450314       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-948/pod-projected-secrets-5b45c522-23ef-4011-b1d2-1ff89bce39a1" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:07:55.128348       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-3509-8567/csi-hostpath-attacher-0" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:55.622398       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-3509-8567/csi-hostpathplugin-0" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:55.794061       1 scheduler.go:604] "Successfully bound pod to node" pod="fsgroupchangepolicy-2626/pod-e407814d-f11f-4e90-b316-91237c7c4238" node="ip-172-20-47-14.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:07:55.926148       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-3509-8567/csi-hostpath-provisioner-0" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:56.040156       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-4109/pod-0744d82d-03a0-456f-8990-d17960cad713" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:07:56.247457       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-3509-8567/csi-hostpath-resizer-0" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:56.569917       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-3509-8567/csi-hostpath-snapshotter-0" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:56.584393       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7184/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-xnsvc" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:58.774693       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-test-3731/alpine-nnp-false-d4cb3bb5-082f-4ed9-901a-8cd8bab9e83d" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0529 01:07:58.848972       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-4109/hostexec-ip-172-20-52-235.ap-northeast-2.compute.internal-wlvsh" node="ip-172-20-52-235.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:58.949910       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-8218/hostexec-ip-172-20-33-144.ap-northeast-2.compute.internal-fcbpk" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0529 01:07:59.058593       1 volume_binding.go:260] Failed to bind volumes for pod "csi-mock-volumes-1360/pvc-volume-tester-dh8cq": binding volumes: provisioning failed for PVC "pvc-4pjz8"
E0529 01:07:59.059209       1 framework.go:744] "Failed running PreBind plugin" err="binding volumes: provisioning failed for PVC \"pvc-4pjz8\"" plugin="VolumeBinding" pod="csi-mock-volumes-1360/pvc-volume-tester-dh8cq"
E0529 01:07:59.059272       1 factory.go:337] "Error scheduling pod; retrying" err="running PreBind plugin \"VolumeBinding\": binding volumes: provisioning failed for PVC \"pvc-4pjz8\"" pod="csi-mock-volumes-1360/pvc-volume-tester-dh8cq"
I0529 01:07:59.827677       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-3509/pod-70004431-cfb7-47a3-8cfb-b86cd2af4057" node="ip-172-20-33-144.ap-northeast-2.compute.internal" evaluatedNodes=5 feas