Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-05-22 07:03
Elapsed: 33m20s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0522 07:04:43.561399    4037 up.go:43] Cleaning up any leaked resources from previous cluster
I0522 07:04:43.561431    4037 dumplogs.go:38] /logs/artifacts/c844f0d7-bacb-11eb-b027-f2836e8f0ab3/kops toolbox dump --name e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user admin
I0522 07:04:43.577864    4056 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0522 07:04:43.578015    4056 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io" not found
W0522 07:04:44.420569    4037 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0522 07:04:44.420660    4037 down.go:48] /logs/artifacts/c844f0d7-bacb-11eb-b027-f2836e8f0ab3/kops delete cluster --name e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --yes
I0522 07:04:44.435579    4066 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0522 07:04:44.435644    4066 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io" not found
I0522 07:04:44.922701    4037 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/05/22 07:04:44 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0522 07:04:44.930580    4037 http.go:37] curl https://ip.jsb.workers.dev
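(The two http.go lines above show the harness probing the GCE metadata server for an external IP and, on the 404 — this job is not running on GCE — falling back to a public what-is-my-ip endpoint. A minimal Go sketch of that metadata-first, public-fallback lookup; the helper name and error handling are assumptions, not the harness's actual code:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// externalIP is a hypothetical helper mirroring the probe order in the log:
// ask the GCE metadata server first, and fall back to a public what-is-my-ip
// endpoint when the metadata path 404s (i.e. the job is not running on GCE).
func externalIP() (string, error) {
	client := &http.Client{Timeout: 5 * time.Second}

	// GCE metadata requests must carry this header to be answered.
	req, err := http.NewRequest("GET",
		"http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip", nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Metadata-Flavor", "Google")
	if resp, err := client.Do(req); err == nil {
		body, readErr := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && readErr == nil {
			return strings.TrimSpace(string(body)), nil
		}
		// Fall through on 404 or read errors, as the harness does.
	}

	// Fallback: the public echo endpoint the harness curls next.
	resp, err := client.Get("https://ip.jsb.workers.dev")
	if err != nil {
		return "", fmt.Errorf("failed to get external ip: %w", err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil
}

func main() {
	ip, err := externalIP()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(ip)
}
```

Whatever address comes back is what feeds the --admin-access 34.123.31.170/32 flag on the create-cluster command below.)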
I0522 07:04:45.023764    4037 up.go:144] /logs/artifacts/c844f0d7-bacb-11eb-b027-f2836e8f0ab3/kops create cluster --name e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.1 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=136693071363/debian-10-amd64-20210329-591 --channel=alpha --networking=cilium --container-runtime=containerd --admin-access 34.123.31.170/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-2a --master-size c5.large
I0522 07:04:45.038713    4076 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0522 07:04:45.038798    4076 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I0522 07:04:45.081993    4076 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0522 07:04:45.567801    4076 new_cluster.go:1011]  Cloud Provider ID = aws
... skipping 42 lines ...

I0522 07:05:14.350940    4037 up.go:181] /logs/artifacts/c844f0d7-bacb-11eb-b027-f2836e8f0ab3/kops validate cluster --name e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0522 07:05:14.370042    4097 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0522 07:05:14.370157    4097 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io

W0522 07:05:15.829639    4097 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
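(The retries that follow are `kops validate cluster` polling until the cluster passes or the --wait 15m0s budget expires; validation keeps failing while the API DNS record still points at the 203.0.113.123 placeholder that kops writes, which dns-controller only replaces once the master is up. A rough Go sketch of that poll-until-healthy pattern, assuming a hypothetical validate() stand-in — this is not kops's actual code, and --count 10 appears to demand ten consecutive clean passes:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// validate is a hypothetical stand-in for one "kops validate cluster" pass.
// It fails while the API DNS name still resolves to the placeholder address
// or while nodes and system pods are unhealthy.
func validate() error {
	return errors.New("cluster not yet healthy")
}

// pollUntilHealthy mirrors the behavior visible in the log: retry roughly
// every 10s, require `count` consecutive clean passes (--count 10 above),
// and give up once the --wait budget (15m here) is exhausted.
func pollUntilHealthy(wait, interval time.Duration, count int) error {
	deadline := time.Now().Add(wait)
	consecutive := 0
	for {
		err := validate()
		if err == nil {
			consecutive++
			if consecutive >= count {
				return nil
			}
		} else {
			consecutive = 0
			if time.Now().After(deadline) {
				return fmt.Errorf("cluster did not validate within %s: %w", wait, err)
			}
			fmt.Printf("(will retry): %v\n", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := pollUntilHealthy(15*time.Minute, 10*time.Second, 10); err != nil {
		fmt.Println(err)
	}
}
```
)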
W0522 07:05:25.861415    4097 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 288 lines (the same INSTANCE GROUPS / "dns apiserver" validation-failure block repeated on each ~10s retry through 07:08:26; the 07:07:46 retry also hit the "no such host" DNS lookup error seen above) ...
W0522 07:08:36.557072    4097 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 8 lines ...
Machine	i-094127254d58b1025				machine "i-094127254d58b1025" has not yet joined cluster
Machine	i-0b9275cce37678aeb				machine "i-0b9275cce37678aeb" has not yet joined cluster
Pod	kube-system/cilium-p87rf			system-node-critical pod "cilium-p87rf" is not ready (cilium-agent)
Pod	kube-system/coredns-autoscaler-6f594f4c58-hsjjc	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-hsjjc" is pending
Pod	kube-system/coredns-f45c4bf76-7wxwz		system-cluster-critical pod "coredns-f45c4bf76-7wxwz" is pending

Validation Failed
W0522 07:08:50.462700    4097 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 8 lines ...
Machine	i-094127254d58b1025				machine "i-094127254d58b1025" has not yet joined cluster
Machine	i-0b9275cce37678aeb				machine "i-0b9275cce37678aeb" has not yet joined cluster
Pod	kube-system/cilium-p87rf			system-node-critical pod "cilium-p87rf" is not ready (cilium-agent)
Pod	kube-system/coredns-autoscaler-6f594f4c58-hsjjc	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-hsjjc" is pending
Pod	kube-system/coredns-f45c4bf76-7wxwz		system-cluster-critical pod "coredns-f45c4bf76-7wxwz" is pending

Validation Failed
W0522 07:09:03.266742    4097 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 16 lines ...
Pod	kube-system/cilium-mbxrd				system-node-critical pod "cilium-mbxrd" is pending
Pod	kube-system/cilium-p59x2				system-node-critical pod "cilium-p59x2" is pending
Pod	kube-system/cilium-p87rf				system-node-critical pod "cilium-p87rf" is not ready (cilium-agent)
Pod	kube-system/coredns-autoscaler-6f594f4c58-hsjjc		system-cluster-critical pod "coredns-autoscaler-6f594f4c58-hsjjc" is pending
Pod	kube-system/coredns-f45c4bf76-7wxwz			system-cluster-critical pod "coredns-f45c4bf76-7wxwz" is pending

Validation Failed
W0522 07:09:15.822390    4097 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 15 lines ...
Pod	kube-system/cilium-mbxrd			system-node-critical pod "cilium-mbxrd" is not ready (cilium-agent)
Pod	kube-system/cilium-p59x2			system-node-critical pod "cilium-p59x2" is pending
Pod	kube-system/cilium-p87rf			system-node-critical pod "cilium-p87rf" is not ready (cilium-agent)
Pod	kube-system/coredns-autoscaler-6f594f4c58-hsjjc	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-hsjjc" is pending
Pod	kube-system/coredns-f45c4bf76-7wxwz		system-cluster-critical pod "coredns-f45c4bf76-7wxwz" is pending

Validation Failed
W0522 07:09:28.476268    4097 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 10 lines ...
Pod	kube-system/cilium-5pxrr			system-node-critical pod "cilium-5pxrr" is not ready (cilium-agent)
Pod	kube-system/cilium-mbxrd			system-node-critical pod "cilium-mbxrd" is not ready (cilium-agent)
Pod	kube-system/cilium-p59x2			system-node-critical pod "cilium-p59x2" is not ready (cilium-agent)
Pod	kube-system/cilium-p87rf			system-node-critical pod "cilium-p87rf" is not ready (cilium-agent)
Pod	kube-system/coredns-autoscaler-6f594f4c58-hsjjc	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-hsjjc" is pending

Validation Failed
W0522 07:09:41.071793    4097 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 6 lines ...
ip-172-20-63-92.ap-northeast-2.compute.internal		node	True

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/cilium-p87rf	system-node-critical pod "cilium-p87rf" is not ready (cilium-agent)

Validation Failed
W0522 07:09:53.682888    4097 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 503 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
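(Skips like the one above — "Only supported for providers [azure] (not aws)" — come from a guard in each spec's BeforeEach that compares the configured cloud provider against the driver's supported list, via the e2e framework's skipper helpers. A stripped-down standalone sketch of that check; the real suite routes this through ginkgo.Skip rather than printing:

```go
package main

import "fmt"

// provider mimics the e2e framework's notion of the active cloud provider,
// set by the test harness; it is "aws" for this run.
var provider = "aws"

// skipUnlessProviderIs is a simplified stand-in for the framework's
// SkipUnlessProviderIs: a spec's BeforeEach calls it, and the spec is
// skipped when the active provider is not in the supported list.
func skipUnlessProviderIs(supported ...string) {
	for _, p := range supported {
		if p == provider {
			return
		}
	}
	// The real helper invokes ginkgo.Skip with this message.
	fmt.Printf("Only supported for providers %v (not %s)\n", supported, provider)
}

func main() {
	skipUnlessProviderIs("azure") // reproduces the azure-disk skip above
}
```
)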
... skipping 581 lines ...
STEP: Destroying namespace "pod-disks-7485" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [1.412 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449

  Requires at least 2 nodes (not 0)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
... skipping 117 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:32.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1021" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:32.819: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:32.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9753" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":1,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:33.058: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:32.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-5923" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":1,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:34.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-9830" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":1,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:34.745: INFO: Only supported for providers [gce gke] (not aws)
... skipping 47 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 128 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:34.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6139" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":2,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:35.989: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:38.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2913" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:38.910: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 65 lines ...
• [SLOW TEST:11.072 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:40.805: INFO: Only supported for providers [gce gke] (not aws)
... skipping 45 lines ...
May 22 07:12:30.364: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
STEP: Creating configMap with name configmap-test-volume-da81ba1f-3ea1-423c-ac64-0a3c9eb6b9e0
STEP: Creating a pod to test consume configMaps
May 22 07:12:30.996: INFO: Waiting up to 5m0s for pod "pod-configmaps-64e9aa87-0e37-4d85-afba-4b2f582bb5b7" in namespace "configmap-604" to be "Succeeded or Failed"
May 22 07:12:31.153: INFO: Pod "pod-configmaps-64e9aa87-0e37-4d85-afba-4b2f582bb5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 157.183811ms
May 22 07:12:33.311: INFO: Pod "pod-configmaps-64e9aa87-0e37-4d85-afba-4b2f582bb5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315322004s
May 22 07:12:35.470: INFO: Pod "pod-configmaps-64e9aa87-0e37-4d85-afba-4b2f582bb5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.473894165s
May 22 07:12:37.629: INFO: Pod "pod-configmaps-64e9aa87-0e37-4d85-afba-4b2f582bb5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.632646554s
May 22 07:12:39.793: INFO: Pod "pod-configmaps-64e9aa87-0e37-4d85-afba-4b2f582bb5b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.79654357s
STEP: Saw pod success
May 22 07:12:39.793: INFO: Pod "pod-configmaps-64e9aa87-0e37-4d85-afba-4b2f582bb5b7" satisfied condition "Succeeded or Failed"
May 22 07:12:39.950: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod pod-configmaps-64e9aa87-0e37-4d85-afba-4b2f582bb5b7 container agnhost-container: <nil>
STEP: delete the pod
May 22 07:12:40.286: INFO: Waiting for pod pod-configmaps-64e9aa87-0e37-4d85-afba-4b2f582bb5b7 to disappear
May 22 07:12:40.444: INFO: Pod pod-configmaps-64e9aa87-0e37-4d85-afba-4b2f582bb5b7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.189 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
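(The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' sequences above are the e2e framework polling the pod's phase every couple of seconds until the pod terminates. A self-contained client-go sketch of the same wait; pod and namespace names are placeholders, and the framework has its own helper for this:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodSucceeded polls the pod's phase until it reaches Succeeded,
// failing fast on Failed, much like the "Waiting up to 5m0s for pod ..."
// lines in the log.
func waitForPodSucceeded(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", name, pod.Status.Phase, time.Since(start))
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // "Saw pod success"
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		}
		return false, nil // still Pending/Running; keep polling
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// Placeholder pod/namespace, echoing the ConfigMap spec above.
	if err := waitForPodSucceeded(cs, "configmap-604", "pod-configmaps-example", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
)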
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:40.930: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 208 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
May 22 07:12:30.709: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
May 22 07:12:30.709: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9782
STEP: Creating a pod to test subpath
May 22 07:12:30.871: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9782" in namespace "provisioning-8718" to be "Succeeded or Failed"
May 22 07:12:31.030: INFO: Pod "pod-subpath-test-inlinevolume-9782": Phase="Pending", Reason="", readiness=false. Elapsed: 159.282537ms
May 22 07:12:33.190: INFO: Pod "pod-subpath-test-inlinevolume-9782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318956982s
May 22 07:12:35.351: INFO: Pod "pod-subpath-test-inlinevolume-9782": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480032486s
May 22 07:12:37.511: INFO: Pod "pod-subpath-test-inlinevolume-9782": Phase="Pending", Reason="", readiness=false. Elapsed: 6.640020533s
May 22 07:12:39.672: INFO: Pod "pod-subpath-test-inlinevolume-9782": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.800863882s
STEP: Saw pod success
May 22 07:12:39.672: INFO: Pod "pod-subpath-test-inlinevolume-9782" satisfied condition "Succeeded or Failed"
May 22 07:12:39.834: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-9782 container test-container-volume-inlinevolume-9782: <nil>
STEP: delete the pod
May 22 07:12:40.189: INFO: Waiting for pod pod-subpath-test-inlinevolume-9782 to disappear
May 22 07:12:40.348: INFO: Pod pod-subpath-test-inlinevolume-9782 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9782
May 22 07:12:40.349: INFO: Deleting pod "pod-subpath-test-inlinevolume-9782" in namespace "provisioning-8718"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:41.155: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 136 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 9 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 22 07:12:32.172: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7d15e56-81dd-44bb-ba9d-e91a2c00b0b0" in namespace "downward-api-3926" to be "Succeeded or Failed"
May 22 07:12:32.329: INFO: Pod "downwardapi-volume-e7d15e56-81dd-44bb-ba9d-e91a2c00b0b0": Phase="Pending", Reason="", readiness=false. Elapsed: 157.393291ms
May 22 07:12:34.487: INFO: Pod "downwardapi-volume-e7d15e56-81dd-44bb-ba9d-e91a2c00b0b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315630943s
May 22 07:12:36.646: INFO: Pod "downwardapi-volume-e7d15e56-81dd-44bb-ba9d-e91a2c00b0b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474592676s
May 22 07:12:38.804: INFO: Pod "downwardapi-volume-e7d15e56-81dd-44bb-ba9d-e91a2c00b0b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.632780251s
May 22 07:12:40.962: INFO: Pod "downwardapi-volume-e7d15e56-81dd-44bb-ba9d-e91a2c00b0b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.790485619s
STEP: Saw pod success
May 22 07:12:40.962: INFO: Pod "downwardapi-volume-e7d15e56-81dd-44bb-ba9d-e91a2c00b0b0" satisfied condition "Succeeded or Failed"
May 22 07:12:41.120: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod downwardapi-volume-e7d15e56-81dd-44bb-ba9d-e91a2c00b0b0 container client-container: <nil>
STEP: delete the pod
May 22 07:12:41.455: INFO: Waiting for pod downwardapi-volume-e7d15e56-81dd-44bb-ba9d-e91a2c00b0b0 to disappear
May 22 07:12:41.613: INFO: Pod downwardapi-volume-e7d15e56-81dd-44bb-ba9d-e91a2c00b0b0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.228 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:42.103: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 39 lines ...
• [SLOW TEST:12.726 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:42.621: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:43.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9503" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:43.369: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
STEP: Creating a kubernetes client
May 22 07:12:29.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
W0522 07:12:30.411935    4773 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 22 07:12:30.412: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:43.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6932" for this suite.


• [SLOW TEST:13.919 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 85 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286

      Disabled temporarily, reopen after #73168 is fixed

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:44.232: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:44.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4355" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":3,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:45.248: INFO: Only supported for providers [openstack] (not aws)
... skipping 45 lines ...
• [SLOW TEST:16.030 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:45.456: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
May 22 07:12:34.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
May 22 07:12:35.797: INFO: Waiting up to 5m0s for pod "pod-2a29f885-6bde-44e8-b3f9-940016a6263d" in namespace "emptydir-9358" to be "Succeeded or Failed"
May 22 07:12:35.955: INFO: Pod "pod-2a29f885-6bde-44e8-b3f9-940016a6263d": Phase="Pending", Reason="", readiness=false. Elapsed: 157.287452ms
May 22 07:12:38.115: INFO: Pod "pod-2a29f885-6bde-44e8-b3f9-940016a6263d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31796829s
May 22 07:12:40.274: INFO: Pod "pod-2a29f885-6bde-44e8-b3f9-940016a6263d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476422018s
May 22 07:12:42.432: INFO: Pod "pod-2a29f885-6bde-44e8-b3f9-940016a6263d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.635067373s
May 22 07:12:44.591: INFO: Pod "pod-2a29f885-6bde-44e8-b3f9-940016a6263d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.793708617s
STEP: Saw pod success
May 22 07:12:44.591: INFO: Pod "pod-2a29f885-6bde-44e8-b3f9-940016a6263d" satisfied condition "Succeeded or Failed"
May 22 07:12:44.748: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod pod-2a29f885-6bde-44e8-b3f9-940016a6263d container test-container: <nil>
STEP: delete the pod
May 22 07:12:45.070: INFO: Waiting for pod pod-2a29f885-6bde-44e8-b3f9-940016a6263d to disappear
May 22 07:12:45.228: INFO: Pod pod-2a29f885-6bde-44e8-b3f9-940016a6263d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.700 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":32,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:46.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3052" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:18.133 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:48.023: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
• [SLOW TEST:19.630 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 47 lines ...
May 22 07:12:38.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
May 22 07:12:39.739: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 22 07:12:40.083: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3004" in namespace "provisioning-3004" to be "Succeeded or Failed"
May 22 07:12:40.242: INFO: Pod "hostpath-symlink-prep-provisioning-3004": Phase="Pending", Reason="", readiness=false. Elapsed: 159.49401ms
May 22 07:12:42.400: INFO: Pod "hostpath-symlink-prep-provisioning-3004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.317415713s
STEP: Saw pod success
May 22 07:12:42.400: INFO: Pod "hostpath-symlink-prep-provisioning-3004" satisfied condition "Succeeded or Failed"
May 22 07:12:42.400: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3004" in namespace "provisioning-3004"
May 22 07:12:42.567: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3004" to be fully deleted
May 22 07:12:42.725: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-mwcx
STEP: Creating a pod to test subpath
May 22 07:12:42.883: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-mwcx" in namespace "provisioning-3004" to be "Succeeded or Failed"
May 22 07:12:43.040: INFO: Pod "pod-subpath-test-inlinevolume-mwcx": Phase="Pending", Reason="", readiness=false. Elapsed: 157.036364ms
May 22 07:12:45.199: INFO: Pod "pod-subpath-test-inlinevolume-mwcx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316205411s
May 22 07:12:47.359: INFO: Pod "pod-subpath-test-inlinevolume-mwcx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.476062511s
STEP: Saw pod success
May 22 07:12:47.359: INFO: Pod "pod-subpath-test-inlinevolume-mwcx" satisfied condition "Succeeded or Failed"
May 22 07:12:47.517: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-mwcx container test-container-subpath-inlinevolume-mwcx: <nil>
STEP: delete the pod
May 22 07:12:47.863: INFO: Waiting for pod pod-subpath-test-inlinevolume-mwcx to disappear
May 22 07:12:48.020: INFO: Pod pod-subpath-test-inlinevolume-mwcx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-mwcx
May 22 07:12:48.020: INFO: Deleting pod "pod-subpath-test-inlinevolume-mwcx" in namespace "provisioning-3004"
STEP: Deleting pod
May 22 07:12:48.177: INFO: Deleting pod "pod-subpath-test-inlinevolume-mwcx" in namespace "provisioning-3004"
May 22 07:12:48.492: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3004" in namespace "provisioning-3004" to be "Succeeded or Failed"
May 22 07:12:48.649: INFO: Pod "hostpath-symlink-prep-provisioning-3004": Phase="Pending", Reason="", readiness=false. Elapsed: 157.141831ms
May 22 07:12:50.807: INFO: Pod "hostpath-symlink-prep-provisioning-3004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.314608911s
STEP: Saw pod success
May 22 07:12:50.807: INFO: Pod "hostpath-symlink-prep-provisioning-3004" satisfied condition "Succeeded or Failed"
May 22 07:12:50.807: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3004" in namespace "provisioning-3004"
May 22 07:12:50.972: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3004" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:51.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3004" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":16,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-4482f45c-68ea-4719-9c96-3fd79fee5574
STEP: Creating a pod to test consume configMaps
May 22 07:12:46.711: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f33e4e4a-d18d-4eac-bf54-e2ea6bd1eccc" in namespace "projected-689" to be "Succeeded or Failed"
May 22 07:12:46.869: INFO: Pod "pod-projected-configmaps-f33e4e4a-d18d-4eac-bf54-e2ea6bd1eccc": Phase="Pending", Reason="", readiness=false. Elapsed: 157.24927ms
May 22 07:12:49.027: INFO: Pod "pod-projected-configmaps-f33e4e4a-d18d-4eac-bf54-e2ea6bd1eccc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315219013s
May 22 07:12:51.187: INFO: Pod "pod-projected-configmaps-f33e4e4a-d18d-4eac-bf54-e2ea6bd1eccc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.475495367s
STEP: Saw pod success
May 22 07:12:51.187: INFO: Pod "pod-projected-configmaps-f33e4e4a-d18d-4eac-bf54-e2ea6bd1eccc" satisfied condition "Succeeded or Failed"
May 22 07:12:51.346: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-projected-configmaps-f33e4e4a-d18d-4eac-bf54-e2ea6bd1eccc container agnhost-container: <nil>
STEP: delete the pod
May 22 07:12:51.673: INFO: Waiting for pod pod-projected-configmaps-f33e4e4a-d18d-4eac-bf54-e2ea6bd1eccc to disappear
May 22 07:12:51.830: INFO: Pod pod-projected-configmaps-f33e4e4a-d18d-4eac-bf54-e2ea6bd1eccc no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.560 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:52.164: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 14 lines ...
      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":32,"failed":0}
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:12:50.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:52.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1535" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:12:51.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:12:53.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7941" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":4,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:12:52.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
May 22 07:12:53.300: INFO: found topology map[topology.kubernetes.io/zone:ap-northeast-2a]
May 22 07:12:53.300: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
May 22 07:12:53.300: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
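
The skipped topology case needs at least two distinct zones: it provisions with AllowedTopologies pinned to one topology and then tries to schedule a pod that conflicts with it. With only ap-northeast-2a discovered, there is nothing to conflict with, hence the skip. A minimal sketch of a StorageClass constrained to a single zone (class name is illustrative; the zone matches the one in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	immediate := storagev1.VolumeBindingImmediate
	sc := &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "topology-demo"}, // illustrative name
		Provisioner:       "kubernetes.io/aws-ebs",
		VolumeBindingMode: &immediate, // bind at PVC creation, as in this test pattern
		AllowedTopologies: []corev1.TopologySelectorTerm{{
			MatchLabelExpressions: []corev1.TopologySelectorLabelRequirement{{
				Key:    "topology.kubernetes.io/zone",
				Values: []string{"ap-northeast-2a"},
			}},
		}},
	}
	out, _ := json.MarshalIndent(sc, "", "  ")
	fmt.Println(string(out))
}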
... skipping 46 lines ...
• [SLOW TEST:6.676 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:54.727: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
STEP: Creating a pod to test downward API volume plugin
May 22 07:12:48.086: INFO: Waiting up to 5m0s for pod "metadata-volume-33b0b4e3-4eea-4e6b-bc30-57ab315828ab" in namespace "downward-api-1957" to be "Succeeded or Failed"
May 22 07:12:48.243: INFO: Pod "metadata-volume-33b0b4e3-4eea-4e6b-bc30-57ab315828ab": Phase="Pending", Reason="", readiness=false. Elapsed: 156.628268ms
May 22 07:12:50.402: INFO: Pod "metadata-volume-33b0b4e3-4eea-4e6b-bc30-57ab315828ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31579506s
May 22 07:12:52.559: INFO: Pod "metadata-volume-33b0b4e3-4eea-4e6b-bc30-57ab315828ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472699538s
May 22 07:12:54.716: INFO: Pod "metadata-volume-33b0b4e3-4eea-4e6b-bc30-57ab315828ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.629240265s
STEP: Saw pod success
May 22 07:12:54.716: INFO: Pod "metadata-volume-33b0b4e3-4eea-4e6b-bc30-57ab315828ab" satisfied condition "Succeeded or Failed"
May 22 07:12:54.872: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod metadata-volume-33b0b4e3-4eea-4e6b-bc30-57ab315828ab container client-container: <nil>
STEP: delete the pod
May 22 07:12:55.192: INFO: Waiting for pod metadata-volume-33b0b4e3-4eea-4e6b-bc30-57ab315828ab to disappear
May 22 07:12:55.348: INFO: Pod metadata-volume-33b0b4e3-4eea-4e6b-bc30-57ab315828ab no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.517 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:55.678: INFO: Only supported for providers [vsphere] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 22 07:12:50.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69bbc578-41fe-4981-b9f1-646f533fb92d" in namespace "projected-8891" to be "Succeeded or Failed"
May 22 07:12:50.551: INFO: Pod "downwardapi-volume-69bbc578-41fe-4981-b9f1-646f533fb92d": Phase="Pending", Reason="", readiness=false. Elapsed: 157.130479ms
May 22 07:12:52.707: INFO: Pod "downwardapi-volume-69bbc578-41fe-4981-b9f1-646f533fb92d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313374244s
May 22 07:12:54.864: INFO: Pod "downwardapi-volume-69bbc578-41fe-4981-b9f1-646f533fb92d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469701019s
May 22 07:12:57.021: INFO: Pod "downwardapi-volume-69bbc578-41fe-4981-b9f1-646f533fb92d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.626948349s
STEP: Saw pod success
May 22 07:12:57.021: INFO: Pod "downwardapi-volume-69bbc578-41fe-4981-b9f1-646f533fb92d" satisfied condition "Succeeded or Failed"
May 22 07:12:57.178: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod downwardapi-volume-69bbc578-41fe-4981-b9f1-646f533fb92d container client-container: <nil>
STEP: delete the pod
May 22 07:12:57.501: INFO: Waiting for pod downwardapi-volume-69bbc578-41fe-4981-b9f1-646f533fb92d to disappear
May 22 07:12:57.658: INFO: Pod downwardapi-volume-69bbc578-41fe-4981-b9f1-646f533fb92d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.533 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 14 lines ...
May 22 07:12:46.129: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4bjks] to have phase Bound
May 22 07:12:46.290: INFO: PersistentVolumeClaim pvc-4bjks found and phase=Bound (161.85547ms)
May 22 07:12:46.291: INFO: Waiting up to 3m0s for PersistentVolume local-5dwwx to have phase Bound
May 22 07:12:46.448: INFO: PersistentVolume local-5dwwx found and phase=Bound (157.431155ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vz7z
STEP: Creating a pod to test subpath
May 22 07:12:46.921: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vz7z" in namespace "provisioning-5975" to be "Succeeded or Failed"
May 22 07:12:47.082: INFO: Pod "pod-subpath-test-preprovisionedpv-vz7z": Phase="Pending", Reason="", readiness=false. Elapsed: 160.583004ms
May 22 07:12:49.246: INFO: Pod "pod-subpath-test-preprovisionedpv-vz7z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324425791s
May 22 07:12:51.404: INFO: Pod "pod-subpath-test-preprovisionedpv-vz7z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482837464s
May 22 07:12:53.562: INFO: Pod "pod-subpath-test-preprovisionedpv-vz7z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.640945208s
May 22 07:12:55.721: INFO: Pod "pod-subpath-test-preprovisionedpv-vz7z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.799546039s
STEP: Saw pod success
May 22 07:12:55.721: INFO: Pod "pod-subpath-test-preprovisionedpv-vz7z" satisfied condition "Succeeded or Failed"
May 22 07:12:55.882: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-vz7z container test-container-subpath-preprovisionedpv-vz7z: <nil>
STEP: delete the pod
May 22 07:12:56.212: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vz7z to disappear
May 22 07:12:56.369: INFO: Pod pod-subpath-test-preprovisionedpv-vz7z no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vz7z
May 22 07:12:56.369: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vz7z" in namespace "provisioning-5975"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":29,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:12:58.987: INFO: Only supported for providers [gce gke] (not aws)
... skipping 134 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:428
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":1,"skipped":13,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:00.383: INFO: Driver nfs doesn't support ext4 -- skipping
... skipping 48 lines ...
• [SLOW TEST:9.693 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":3,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:04.470: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 30 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
May 22 07:12:30.723: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
May 22 07:12:30.723: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-zjsx
STEP: Creating a pod to test atomic-volume-subpath
May 22 07:12:30.891: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-zjsx" in namespace "provisioning-6828" to be "Succeeded or Failed"
May 22 07:12:31.048: INFO: Pod "pod-subpath-test-inlinevolume-zjsx": Phase="Pending", Reason="", readiness=false. Elapsed: 156.802999ms
May 22 07:12:33.213: INFO: Pod "pod-subpath-test-inlinevolume-zjsx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321287625s
May 22 07:12:35.371: INFO: Pod "pod-subpath-test-inlinevolume-zjsx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479936014s
May 22 07:12:37.529: INFO: Pod "pod-subpath-test-inlinevolume-zjsx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.6379972s
May 22 07:12:39.687: INFO: Pod "pod-subpath-test-inlinevolume-zjsx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.795730359s
May 22 07:12:41.845: INFO: Pod "pod-subpath-test-inlinevolume-zjsx": Phase="Running", Reason="", readiness=true. Elapsed: 10.954096209s
... skipping 5 lines ...
May 22 07:12:54.812: INFO: Pod "pod-subpath-test-inlinevolume-zjsx": Phase="Running", Reason="", readiness=true. Elapsed: 23.920790676s
May 22 07:12:56.971: INFO: Pod "pod-subpath-test-inlinevolume-zjsx": Phase="Running", Reason="", readiness=true. Elapsed: 26.07989035s
May 22 07:12:59.133: INFO: Pod "pod-subpath-test-inlinevolume-zjsx": Phase="Running", Reason="", readiness=true. Elapsed: 28.241199136s
May 22 07:13:01.296: INFO: Pod "pod-subpath-test-inlinevolume-zjsx": Phase="Running", Reason="", readiness=true. Elapsed: 30.404684326s
May 22 07:13:03.453: INFO: Pod "pod-subpath-test-inlinevolume-zjsx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.562022935s
STEP: Saw pod success
May 22 07:13:03.453: INFO: Pod "pod-subpath-test-inlinevolume-zjsx" satisfied condition "Succeeded or Failed"
May 22 07:13:03.610: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-zjsx container test-container-subpath-inlinevolume-zjsx: <nil>
STEP: delete the pod
May 22 07:13:03.932: INFO: Waiting for pod pod-subpath-test-inlinevolume-zjsx to disappear
May 22 07:13:04.089: INFO: Pod pod-subpath-test-inlinevolume-zjsx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-zjsx
May 22 07:13:04.089: INFO: Deleting pod "pod-subpath-test-inlinevolume-zjsx" in namespace "provisioning-6828"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 22 07:13:01.367: INFO: Waiting up to 5m0s for pod "downwardapi-volume-beddadfc-2b6b-4f86-b86c-3ee80b273da1" in namespace "projected-3212" to be "Succeeded or Failed"
May 22 07:13:01.529: INFO: Pod "downwardapi-volume-beddadfc-2b6b-4f86-b86c-3ee80b273da1": Phase="Pending", Reason="", readiness=false. Elapsed: 162.124846ms
May 22 07:13:03.692: INFO: Pod "downwardapi-volume-beddadfc-2b6b-4f86-b86c-3ee80b273da1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324805624s
May 22 07:13:05.858: INFO: Pod "downwardapi-volume-beddadfc-2b6b-4f86-b86c-3ee80b273da1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.490608272s
STEP: Saw pod success
May 22 07:13:05.858: INFO: Pod "downwardapi-volume-beddadfc-2b6b-4f86-b86c-3ee80b273da1" satisfied condition "Succeeded or Failed"
May 22 07:13:06.021: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod downwardapi-volume-beddadfc-2b6b-4f86-b86c-3ee80b273da1 container client-container: <nil>
STEP: delete the pod
May 22 07:13:06.352: INFO: Waiting for pod downwardapi-volume-beddadfc-2b6b-4f86-b86c-3ee80b273da1 to disappear
May 22 07:13:06.520: INFO: Pod downwardapi-volume-beddadfc-2b6b-4f86-b86c-3ee80b273da1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.469 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:06.893: INFO: Only supported for providers [gce gke] (not aws)
... skipping 112 lines ...
• [SLOW TEST:15.741 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:07.315: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 167 lines ...
May 22 07:12:45.106: INFO: PersistentVolumeClaim pvc-rgzvr found but phase is Pending instead of Bound.
May 22 07:12:47.262: INFO: PersistentVolumeClaim pvc-rgzvr found and phase=Bound (2.30952177s)
May 22 07:12:47.262: INFO: Waiting up to 3m0s for PersistentVolume local-s98t8 to have phase Bound
May 22 07:12:47.416: INFO: PersistentVolume local-s98t8 found and phase=Bound (153.988693ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-s7hz
STEP: Creating a pod to test subpath
May 22 07:12:47.889: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-s7hz" in namespace "provisioning-5979" to be "Succeeded or Failed"
May 22 07:12:48.043: INFO: Pod "pod-subpath-test-preprovisionedpv-s7hz": Phase="Pending", Reason="", readiness=false. Elapsed: 153.87404ms
May 22 07:12:50.198: INFO: Pod "pod-subpath-test-preprovisionedpv-s7hz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308274241s
May 22 07:12:52.356: INFO: Pod "pod-subpath-test-preprovisionedpv-s7hz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.466782616s
May 22 07:12:54.515: INFO: Pod "pod-subpath-test-preprovisionedpv-s7hz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.625373899s
May 22 07:12:56.674: INFO: Pod "pod-subpath-test-preprovisionedpv-s7hz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.785161813s
STEP: Saw pod success
May 22 07:12:56.675: INFO: Pod "pod-subpath-test-preprovisionedpv-s7hz" satisfied condition "Succeeded or Failed"
May 22 07:12:56.830: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-s7hz container test-container-subpath-preprovisionedpv-s7hz: <nil>
STEP: delete the pod
May 22 07:12:57.145: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-s7hz to disappear
May 22 07:12:57.299: INFO: Pod pod-subpath-test-preprovisionedpv-s7hz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-s7hz
May 22 07:12:57.299: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-s7hz" in namespace "provisioning-5979"
STEP: Creating pod pod-subpath-test-preprovisionedpv-s7hz
STEP: Creating a pod to test subpath
May 22 07:12:57.616: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-s7hz" in namespace "provisioning-5979" to be "Succeeded or Failed"
May 22 07:12:57.770: INFO: Pod "pod-subpath-test-preprovisionedpv-s7hz": Phase="Pending", Reason="", readiness=false. Elapsed: 153.829115ms
May 22 07:12:59.929: INFO: Pod "pod-subpath-test-preprovisionedpv-s7hz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313057577s
May 22 07:13:02.085: INFO: Pod "pod-subpath-test-preprovisionedpv-s7hz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468580079s
May 22 07:13:04.243: INFO: Pod "pod-subpath-test-preprovisionedpv-s7hz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.626311473s
STEP: Saw pod success
May 22 07:13:04.243: INFO: Pod "pod-subpath-test-preprovisionedpv-s7hz" satisfied condition "Succeeded or Failed"
May 22 07:13:04.397: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-s7hz container test-container-subpath-preprovisionedpv-s7hz: <nil>
STEP: delete the pod
May 22 07:13:04.711: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-s7hz to disappear
May 22 07:13:04.865: INFO: Pod pod-subpath-test-preprovisionedpv-s7hz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-s7hz
May 22 07:13:04.865: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-s7hz" in namespace "provisioning-5979"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":14,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
May 22 07:13:00.539: INFO: PersistentVolumeClaim pvc-prgv7 found but phase is Pending instead of Bound.
May 22 07:13:02.696: INFO: PersistentVolumeClaim pvc-prgv7 found and phase=Bound (15.277016947s)
May 22 07:13:02.696: INFO: Waiting up to 3m0s for PersistentVolume local-4mnfz to have phase Bound
May 22 07:13:02.852: INFO: PersistentVolume local-4mnfz found and phase=Bound (156.220474ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-g658
STEP: Creating a pod to test subpath
May 22 07:13:03.322: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-g658" in namespace "provisioning-6236" to be "Succeeded or Failed"
May 22 07:13:03.478: INFO: Pod "pod-subpath-test-preprovisionedpv-g658": Phase="Pending", Reason="", readiness=false. Elapsed: 156.235813ms
May 22 07:13:05.648: INFO: Pod "pod-subpath-test-preprovisionedpv-g658": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325766402s
May 22 07:13:07.805: INFO: Pod "pod-subpath-test-preprovisionedpv-g658": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.483103973s
STEP: Saw pod success
May 22 07:13:07.805: INFO: Pod "pod-subpath-test-preprovisionedpv-g658" satisfied condition "Succeeded or Failed"
May 22 07:13:07.962: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-g658 container test-container-volume-preprovisionedpv-g658: <nil>
STEP: delete the pod
May 22 07:13:08.283: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-g658 to disappear
May 22 07:13:08.440: INFO: Pod pod-subpath-test-preprovisionedpv-g658 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-g658
May 22 07:13:08.440: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-g658" in namespace "provisioning-6236"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:10.767: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:12:59.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
May 22 07:13:07.175: INFO: The status of Pod pod-update-activedeadlineseconds-c14d8109-ea9f-4076-b398-510ba8afca43 is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 22 07:13:08.323: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c14d8109-ea9f-4076-b398-510ba8afca43"
May 22 07:13:08.323: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c14d8109-ea9f-4076-b398-510ba8afca43" in namespace "pods-1240" to be "terminated due to deadline exceeded"
May 22 07:13:08.483: INFO: Pod "pod-update-activedeadlineseconds-c14d8109-ea9f-4076-b398-510ba8afca43": Phase="Running", Reason="", readiness=true. Elapsed: 160.302759ms
May 22 07:13:10.644: INFO: Pod "pod-update-activedeadlineseconds-c14d8109-ea9f-4076-b398-510ba8afca43": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.321204565s
May 22 07:13:10.644: INFO: Pod "pod-update-activedeadlineseconds-c14d8109-ea9f-4076-b398-510ba8afca43" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:10.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1240" for this suite.


• [SLOW TEST:11.176 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:13:07.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
May 22 07:13:08.182: INFO: Waiting up to 5m0s for pod "downward-api-9b470d72-496a-4ed8-b064-c25cef573057" in namespace "downward-api-8574" to be "Succeeded or Failed"
May 22 07:13:08.339: INFO: Pod "downward-api-9b470d72-496a-4ed8-b064-c25cef573057": Phase="Pending", Reason="", readiness=false. Elapsed: 157.223507ms
May 22 07:13:10.497: INFO: Pod "downward-api-9b470d72-496a-4ed8-b064-c25cef573057": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.315627703s
STEP: Saw pod success
May 22 07:13:10.497: INFO: Pod "downward-api-9b470d72-496a-4ed8-b064-c25cef573057" satisfied condition "Succeeded or Failed"
May 22 07:13:10.655: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod downward-api-9b470d72-496a-4ed8-b064-c25cef573057 container dapi-container: <nil>
STEP: delete the pod
May 22 07:13:10.977: INFO: Waiting for pod downward-api-9b470d72-496a-4ed8-b064-c25cef573057 to disappear
May 22 07:13:11.134: INFO: Pod downward-api-9b470d72-496a-4ed8-b064-c25cef573057 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:11.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8574" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:11.462: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-12c0a0b0-e338-46f6-92ab-b74dd4db9906
STEP: Creating a pod to test consume secrets
May 22 07:13:08.468: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4610ca46-6ad9-4805-8d51-19ffa8f48c86" in namespace "projected-9035" to be "Succeeded or Failed"
May 22 07:13:08.628: INFO: Pod "pod-projected-secrets-4610ca46-6ad9-4805-8d51-19ffa8f48c86": Phase="Pending", Reason="", readiness=false. Elapsed: 159.923812ms
May 22 07:13:10.789: INFO: Pod "pod-projected-secrets-4610ca46-6ad9-4805-8d51-19ffa8f48c86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321170822s
STEP: Saw pod success
May 22 07:13:10.789: INFO: Pod "pod-projected-secrets-4610ca46-6ad9-4805-8d51-19ffa8f48c86" satisfied condition "Succeeded or Failed"
May 22 07:13:10.950: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod pod-projected-secrets-4610ca46-6ad9-4805-8d51-19ffa8f48c86 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 22 07:13:11.280: INFO: Waiting for pod pod-projected-secrets-4610ca46-6ad9-4805-8d51-19ffa8f48c86 to disappear
May 22 07:13:11.441: INFO: Pod pod-projected-secrets-4610ca46-6ad9-4805-8d51-19ffa8f48c86 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:11.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9035" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:11.772: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 56 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
... skipping 38 lines ...
STEP: Destroying namespace "apply-6234" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist","total":-1,"completed":3,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:13.780: INFO: Only supported for providers [openstack] (not aws)
... skipping 64 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:15.232: INFO: Driver hostPath doesn't support ext3 -- skipping
... skipping 112 lines ...
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-7997
[It] should adopt matching orphans and release non-matching pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:165
STEP: Creating statefulset ss in namespace statefulset-7997
May 22 07:13:14.950: INFO: error finding default storageClass : No default storage class found
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
May 22 07:13:14.950: INFO: Deleting all statefulset in ns statefulset-7997
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:15.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should adopt matching orphans and release non-matching pods [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:165

    error finding default storageClass : No default storage class found

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819
------------------------------
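
The StatefulSet case above could not run because the cluster has no StorageClass marked as the default, so its volumeClaimTemplates had nothing to bind against. A class becomes the default via the well-known storageclass.kubernetes.io/is-default-class annotation; a minimal sketch (class name is illustrative, the provisioner matches this AWS cluster's in-tree driver):

package main

import (
	"encoding/json"
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sc := &storagev1.StorageClass{
		ObjectMeta: metav1.ObjectMeta{
			Name: "default-gp2", // illustrative name
			Annotations: map[string]string{
				// PVCs with no storageClassName fall back to this class.
				"storageclass.kubernetes.io/is-default-class": "true",
			},
		},
		Provisioner: "kubernetes.io/aws-ebs",
	}
	out, _ := json.MarshalIndent(sc, "", "  ")
	fmt.Println(string(out))
}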
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Generated clientset
... skipping 42 lines ...
May 22 07:12:52.152: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:12:52.310: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:12:53.428: INFO: Unable to read jessie_udp@dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:12:53.586: INFO: Unable to read jessie_tcp@dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:12:53.745: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:12:53.903: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:12:54.861: INFO: Lookups using dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2 failed for: [wheezy_udp@dns-test-service.dns-5168.svc.cluster.local wheezy_tcp@dns-test-service.dns-5168.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local jessie_udp@dns-test-service.dns-5168.svc.cluster.local jessie_tcp@dns-test-service.dns-5168.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local]

May 22 07:13:00.020: INFO: Unable to read wheezy_udp@dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:00.179: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:00.337: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:00.496: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:01.626: INFO: Unable to read jessie_udp@dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:01.785: INFO: Unable to read jessie_tcp@dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:02.002: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:02.170: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:03.126: INFO: Lookups using dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2 failed for: [wheezy_udp@dns-test-service.dns-5168.svc.cluster.local wheezy_tcp@dns-test-service.dns-5168.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local jessie_udp@dns-test-service.dns-5168.svc.cluster.local jessie_tcp@dns-test-service.dns-5168.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local]

May 22 07:13:05.028: INFO: Unable to read wheezy_udp@dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:05.188: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:05.349: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:05.515: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:06.644: INFO: Unable to read jessie_udp@dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:06.804: INFO: Unable to read jessie_tcp@dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:06.962: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:07.185: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:08.139: INFO: Lookups using dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2 failed for: [wheezy_udp@dns-test-service.dns-5168.svc.cluster.local wheezy_tcp@dns-test-service.dns-5168.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local jessie_udp@dns-test-service.dns-5168.svc.cluster.local jessie_tcp@dns-test-service.dns-5168.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5168.svc.cluster.local]

May 22 07:13:10.026: INFO: Unable to read wheezy_udp@dns-test-service.dns-5168.svc.cluster.local from pod dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2: the server could not find the requested resource (get pods dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2)
May 22 07:13:13.054: INFO: Lookups using dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2 failed for: [wheezy_udp@dns-test-service.dns-5168.svc.cluster.local]

May 22 07:13:18.036: INFO: DNS probes using dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:49.080 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":1,"skipped":26,"failed":0}

S
------------------------------
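The DNS block above is the conformance probe loop: pods named for the wheezy and jessie images repeatedly resolve the test service's A and SRV records over UDP and TCP until every lookup succeeds, which is why the same "Unable to read" lines recur until 07:13:18. A sketch of the lookups themselves, assuming it runs inside the cluster where *.svc.cluster.local is resolvable; the names are copied from the log, and outside a pod these lookups simply fail.

// dnsprobe.go: the shape of lookups the probe pods perform.
package main

import (
	"fmt"
	"net"
)

func main() {
	const svc = "dns-test-service.dns-5168.svc.cluster.local"

	// A-record lookup for the service name, checked over both UDP and TCP
	// by the wheezy/jessie probe pods.
	addrs, err := net.LookupHost(svc)
	if err != nil {
		fmt.Printf("lookup %s failed: %v\n", svc, err)
	} else {
		fmt.Printf("lookup %s -> %v\n", svc, addrs)
	}

	// SRV lookup for the named port, i.e. _http._tcp.<service>.<ns>.svc...
	_, srvs, err := net.LookupSRV("http", "tcp", svc)
	if err != nil {
		fmt.Printf("srv lookup failed: %v\n", err)
		return
	}
	for _, srv := range srvs {
		fmt.Printf("srv target=%s port=%d\n", srv.Target, srv.Port)
	}
}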
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:19.027: INFO: Only supported for providers [openstack] (not aws)
... skipping 63 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:19.048: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 106 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":3,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:19.482: INFO: Only supported for providers [gce gke] (not aws)
... skipping 88 lines ...
May 22 07:13:19.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 22 07:13:20.067: INFO: Waiting up to 5m0s for pod "pod-307f77eb-c59b-46a9-8515-a3d4a5e0505b" in namespace "emptydir-3782" to be "Succeeded or Failed"
May 22 07:13:20.231: INFO: Pod "pod-307f77eb-c59b-46a9-8515-a3d4a5e0505b": Phase="Pending", Reason="", readiness=false. Elapsed: 163.126756ms
May 22 07:13:22.391: INFO: Pod "pod-307f77eb-c59b-46a9-8515-a3d4a5e0505b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.323676992s
STEP: Saw pod success
May 22 07:13:22.391: INFO: Pod "pod-307f77eb-c59b-46a9-8515-a3d4a5e0505b" satisfied condition "Succeeded or Failed"
May 22 07:13:22.551: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod pod-307f77eb-c59b-46a9-8515-a3d4a5e0505b container test-container: <nil>
STEP: delete the pod
May 22 07:13:22.884: INFO: Waiting for pod pod-307f77eb-c59b-46a9-8515-a3d4a5e0505b to disappear
May 22 07:13:23.044: INFO: Pod pod-307f77eb-c59b-46a9-8515-a3d4a5e0505b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:23.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3782" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":39,"failed":0}

S
------------------------------
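The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above are a poll-until-terminal-phase loop: get the pod, log its phase and elapsed time, stop once it reaches Succeeded or Failed. A condensed sketch of that pattern with client-go's wait helpers, assuming an existing clientset; the package and function names are illustrative, not the framework's.

// podwait.go: poll a pod's phase every 2s for up to 5m, mirroring the log.
package podwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodSuccessOrFailure blocks until the pod reaches a terminal phase
// or the timeout expires.
func WaitForPodSuccessOrFailure(cs kubernetes.Interface, ns, name string) (v1.PodPhase, error) {
	var phase v1.PodPhase
	start := time.Now()
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, phase, time.Since(start))
		return phase == v1.PodSucceeded || phase == v1.PodFailed, nil
	})
	return phase, err
}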
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:23.384: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 82 lines ...
May 22 07:13:17.604: INFO: Got stdout from 13.125.198.209:22: Hello from admin@ip-172-20-63-92
STEP: SSH'ing to 1 node and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 node and running echo "stdout" && echo "stderr" >&2 && exit 7
May 22 07:13:21.252: INFO: Got stdout from 3.35.219.252:22: stdout
May 22 07:13:21.253: INFO: Got stderr from 3.35.219.252:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing admin@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:26.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-277" for this suite.


• [SLOW TEST:19.015 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":3,"skipped":20,"failed":0}

S
------------------------------
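The SSH test above dials each node, runs a shell command, and records stdout, stderr, and the exit status separately (note the deliberate exit 7). A small sketch of the same mechanics with golang.org/x/crypto/ssh, reusing the node address and key path that appear in this log purely as placeholders.

// sshrun.go: run a remote command and capture stdout/stderr/exit code.
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/etc/aws-ssh/aws-ssh-private") // key path from this job
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "admin",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test cluster
	}
	client, err := ssh.Dial("tcp", "13.125.198.209:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	var stdout, stderr bytes.Buffer
	session.Stdout, session.Stderr = &stdout, &stderr
	err = session.Run(`echo "stdout" && echo "stderr" >&2 && exit 7`)
	code := 0
	if err != nil {
		if exitErr, ok := err.(*ssh.ExitError); ok {
			code = exitErr.ExitStatus() // remote command exited non-zero
		} else {
			panic(err) // connection-level failure, not a remote exit code
		}
	}
	fmt.Printf("stdout=%q stderr=%q code=%d\n", stdout.String(), stderr.String(), code)
}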
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
May 22 07:13:14.480: INFO: PersistentVolumeClaim pvc-9hjks found but phase is Pending instead of Bound.
May 22 07:13:16.639: INFO: PersistentVolumeClaim pvc-9hjks found and phase=Bound (8.795841592s)
May 22 07:13:16.639: INFO: Waiting up to 3m0s for PersistentVolume local-2tj85 to have phase Bound
May 22 07:13:16.798: INFO: PersistentVolume local-2tj85 found and phase=Bound (159.45171ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-k5s9
STEP: Creating a pod to test subpath
May 22 07:13:17.275: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k5s9" in namespace "provisioning-3584" to be "Succeeded or Failed"
May 22 07:13:17.433: INFO: Pod "pod-subpath-test-preprovisionedpv-k5s9": Phase="Pending", Reason="", readiness=false. Elapsed: 157.595703ms
May 22 07:13:19.596: INFO: Pod "pod-subpath-test-preprovisionedpv-k5s9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320430227s
May 22 07:13:21.755: INFO: Pod "pod-subpath-test-preprovisionedpv-k5s9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.479171009s
STEP: Saw pod success
May 22 07:13:21.755: INFO: Pod "pod-subpath-test-preprovisionedpv-k5s9" satisfied condition "Succeeded or Failed"
May 22 07:13:21.912: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-k5s9 container test-container-subpath-preprovisionedpv-k5s9: <nil>
STEP: delete the pod
May 22 07:13:22.243: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k5s9 to disappear
May 22 07:13:22.400: INFO: Pod pod-subpath-test-preprovisionedpv-k5s9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k5s9
May 22 07:13:22.400: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k5s9" in namespace "provisioning-3584"
STEP: Creating pod pod-subpath-test-preprovisionedpv-k5s9
STEP: Creating a pod to test subpath
May 22 07:13:22.717: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k5s9" in namespace "provisioning-3584" to be "Succeeded or Failed"
May 22 07:13:22.874: INFO: Pod "pod-subpath-test-preprovisionedpv-k5s9": Phase="Pending", Reason="", readiness=false. Elapsed: 157.356655ms
May 22 07:13:25.032: INFO: Pod "pod-subpath-test-preprovisionedpv-k5s9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.315254661s
STEP: Saw pod success
May 22 07:13:25.032: INFO: Pod "pod-subpath-test-preprovisionedpv-k5s9" satisfied condition "Succeeded or Failed"
May 22 07:13:25.190: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-k5s9 container test-container-subpath-preprovisionedpv-k5s9: <nil>
STEP: delete the pod
May 22 07:13:25.520: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k5s9 to disappear
May 22 07:13:25.678: INFO: Pod pod-subpath-test-preprovisionedpv-k5s9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k5s9
May 22 07:13:25.678: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k5s9" in namespace "provisioning-3584"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":37,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:27.918: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 105 lines ...
STEP: Destroying namespace "apply-4615" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":4,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 114 lines ...
STEP: creating an object not containing a namespace with in-cluster config
May 22 07:13:19.709: INFO: Running '/tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5554 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
May 22 07:13:21.595: INFO: rc: 255
STEP: trying to use kubectl with invalid token
May 22 07:13:21.596: INFO: Running '/tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5554 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
May 22 07:13:23.354: INFO: rc: 255
May 22 07:13:23.355: INFO: got err error running /tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5554 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0522 07:13:23.160730     196 merged_client_builder.go:163] Using in-cluster namespace
I0522 07:13:23.161132     196 merged_client_builder.go:121] Using in-cluster configuration
I0522 07:13:23.163627     196 merged_client_builder.go:121] Using in-cluster configuration
I0522 07:13:23.171175     196 merged_client_builder.go:121] Using in-cluster configuration
I0522 07:13:23.171524     196 round_trippers.go:432] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-5554/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0522 07:13:23.177511     196 helpers.go:115] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc0004441c0, 0x68, 0x1af)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x316f740, 0xc000000003, 0x0, 0x0, 0xc0006d4fc0, 0x26cc9dc, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x316f740, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0008e4b80, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0006c6cc0, 0x3a, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x21357e0, 0xc00000d3c8, 0x1fb10a8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:177 +0x8a3
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0004b42c0, 0xc000148e40, 0x1, 0x3)
... skipping 66 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
May 22 07:13:23.355: INFO: Running '/tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5554 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
May 22 07:13:25.117: INFO: rc: 255
May 22 07:13:25.117: INFO: got err error running /tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5554 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0522 07:13:24.919863     208 merged_client_builder.go:163] Using in-cluster namespace
I0522 07:13:24.929383     208 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 9 milliseconds
I0522 07:13:24.929508     208 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0522 07:13:24.931605     208 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 1 milliseconds
I0522 07:13:24.931657     208 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0522 07:13:24.931706     208 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0522 07:13:24.933699     208 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 1 milliseconds
I0522 07:13:24.933848     208 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0522 07:13:24.935665     208 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 1 milliseconds
I0522 07:13:24.935721     208 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0522 07:13:24.945010     208 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 9 milliseconds
I0522 07:13:24.945285     208 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0522 07:13:24.945516     208 helpers.go:234] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.64.0.10:53: no such host
F0522 07:13:24.945683     208 helpers.go:115] Unable to connect to the server: dial tcp: lookup invalid on 100.64.0.10:53: no such host
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc00002ca80, 0x88, 0x1b8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x316f740, 0xc000000003, 0x0, 0x0, 0xc00077afc0, 0x26cc9dc, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x316f740, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0005649d0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0000490e0, 0x59, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2134b00, 0xc00023fbf0, 0x1fb10a8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000292b00, 0xc000423770, 0x1, 0x3)
... skipping 30 lines ...
	/usr/local/go/src/net/http/client.go:396 +0x337

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
May 22 07:13:25.117: INFO: Running '/tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5554 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
May 22 07:13:26.785: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
May 22 07:13:26.786: INFO: stdout: "I0522 07:13:26.678290     220 merged_client_builder.go:121] Using in-cluster configuration\nI0522 07:13:26.680801     220 merged_client_builder.go:121] Using in-cluster configuration\nI0522 07:13:26.688727     220 merged_client_builder.go:121] Using in-cluster configuration\nI0522 07:13:26.698924     220 round_trippers.go:454] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 9 milliseconds\nNo resources found in invalid namespace.\n"
May 22 07:13:26.786: INFO: stdout: I0522 07:13:26.678290     220 merged_client_builder.go:121] Using in-cluster configuration
... skipping 76 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should handle in-cluster config
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:636
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:31.150: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:12:53.611: INFO: >>> kubeConfig: /root/.kube/config
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":3,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:31.952: INFO: Driver nfs doesn't support ext3 -- skipping
... skipping 46 lines ...
May 22 07:13:00.424: INFO: PersistentVolumeClaim pvc-q842f found but phase is Pending instead of Bound.
May 22 07:13:02.585: INFO: PersistentVolumeClaim pvc-q842f found and phase=Bound (13.11131802s)
May 22 07:13:02.585: INFO: Waiting up to 3m0s for PersistentVolume local-kvbgv to have phase Bound
May 22 07:13:02.743: INFO: PersistentVolume local-kvbgv found and phase=Bound (157.33066ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kc5m
STEP: Creating a pod to test atomic-volume-subpath
May 22 07:13:03.216: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kc5m" in namespace "provisioning-1643" to be "Succeeded or Failed"
May 22 07:13:03.373: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m": Phase="Pending", Reason="", readiness=false. Elapsed: 157.271148ms
May 22 07:13:05.545: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328935898s
May 22 07:13:07.703: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.486620562s
May 22 07:13:09.861: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m": Phase="Running", Reason="", readiness=true. Elapsed: 6.644510793s
May 22 07:13:12.022: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m": Phase="Running", Reason="", readiness=true. Elapsed: 8.806284731s
May 22 07:13:14.183: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m": Phase="Running", Reason="", readiness=true. Elapsed: 10.967074747s
May 22 07:13:16.341: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m": Phase="Running", Reason="", readiness=true. Elapsed: 13.12547085s
May 22 07:13:18.500: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m": Phase="Running", Reason="", readiness=true. Elapsed: 15.28396994s
May 22 07:13:20.658: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m": Phase="Running", Reason="", readiness=true. Elapsed: 17.442237721s
May 22 07:13:22.817: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m": Phase="Running", Reason="", readiness=true. Elapsed: 19.601003108s
May 22 07:13:24.976: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m": Phase="Running", Reason="", readiness=true. Elapsed: 21.760106373s
May 22 07:13:27.135: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.918745636s
STEP: Saw pod success
May 22 07:13:27.135: INFO: Pod "pod-subpath-test-preprovisionedpv-kc5m" satisfied condition "Succeeded or Failed"
May 22 07:13:27.293: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-kc5m container test-container-subpath-preprovisionedpv-kc5m: <nil>
STEP: delete the pod
May 22 07:13:27.619: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kc5m to disappear
May 22 07:13:27.776: INFO: Pod pod-subpath-test-preprovisionedpv-kc5m no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kc5m
May 22 07:13:27.776: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kc5m" in namespace "provisioning-1643"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:33.302: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":28,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:13:30.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-1b85e500-f954-49b6-b07b-ceec1e549262
STEP: Creating a pod to test consume configMaps
May 22 07:13:31.243: INFO: Waiting up to 5m0s for pod "pod-configmaps-3b59a176-d34e-41a3-873e-a33386653845" in namespace "configmap-9062" to be "Succeeded or Failed"
May 22 07:13:31.400: INFO: Pod "pod-configmaps-3b59a176-d34e-41a3-873e-a33386653845": Phase="Pending", Reason="", readiness=false. Elapsed: 157.22897ms
May 22 07:13:33.558: INFO: Pod "pod-configmaps-3b59a176-d34e-41a3-873e-a33386653845": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.315722179s
STEP: Saw pod success
May 22 07:13:33.558: INFO: Pod "pod-configmaps-3b59a176-d34e-41a3-873e-a33386653845" satisfied condition "Succeeded or Failed"
May 22 07:13:33.716: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod pod-configmaps-3b59a176-d34e-41a3-873e-a33386653845 container agnhost-container: <nil>
STEP: delete the pod
May 22 07:13:34.039: INFO: Waiting for pod pod-configmaps-3b59a176-d34e-41a3-873e-a33386653845 to disappear
May 22 07:13:34.196: INFO: Pod pod-configmaps-3b59a176-d34e-41a3-873e-a33386653845 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:34.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9062" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:13:27.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
May 22 07:13:28.420: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-7293462a-3a79-4dd5-81a1-cea7c8256cc7" in namespace "security-context-test-977" to be "Succeeded or Failed"
May 22 07:13:28.579: INFO: Pod "alpine-nnp-true-7293462a-3a79-4dd5-81a1-cea7c8256cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 157.966627ms
May 22 07:13:30.736: INFO: Pod "alpine-nnp-true-7293462a-3a79-4dd5-81a1-cea7c8256cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315742214s
May 22 07:13:32.894: INFO: Pod "alpine-nnp-true-7293462a-3a79-4dd5-81a1-cea7c8256cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.473743209s
May 22 07:13:35.053: INFO: Pod "alpine-nnp-true-7293462a-3a79-4dd5-81a1-cea7c8256cc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.632478779s
May 22 07:13:35.053: INFO: Pod "alpine-nnp-true-7293462a-3a79-4dd5-81a1-cea7c8256cc7" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:35.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-977" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":46,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  GlusterDynamicProvisioner
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:793
    should create and delete persistent volumes [fast]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":-1,"completed":4,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
• [SLOW TEST:7.053 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":8,"skipped":30,"failed":0}
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:13:36.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:37.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2484" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":9,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:38.045: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
May 22 07:12:45.788: INFO: PersistentVolumeClaim pvc-lkxsx found but phase is Pending instead of Bound.
May 22 07:12:47.954: INFO: PersistentVolumeClaim pvc-lkxsx found and phase=Bound (15.293558066s)
May 22 07:12:47.954: INFO: Waiting up to 3m0s for PersistentVolume aws-mp96s to have phase Bound
May 22 07:12:48.115: INFO: PersistentVolume aws-mp96s found and phase=Bound (160.522509ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-mvzq
STEP: Creating a pod to test exec-volume-test
May 22 07:12:48.599: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-mvzq" in namespace "volume-1045" to be "Succeeded or Failed"
May 22 07:12:48.759: INFO: Pod "exec-volume-test-preprovisionedpv-mvzq": Phase="Pending", Reason="", readiness=false. Elapsed: 160.537392ms
May 22 07:12:50.921: INFO: Pod "exec-volume-test-preprovisionedpv-mvzq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322260105s
May 22 07:12:53.088: INFO: Pod "exec-volume-test-preprovisionedpv-mvzq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488838574s
May 22 07:12:55.250: INFO: Pod "exec-volume-test-preprovisionedpv-mvzq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.651032822s
May 22 07:12:57.412: INFO: Pod "exec-volume-test-preprovisionedpv-mvzq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.81326863s
May 22 07:12:59.574: INFO: Pod "exec-volume-test-preprovisionedpv-mvzq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.975213009s
... skipping 4 lines ...
May 22 07:13:10.388: INFO: Pod "exec-volume-test-preprovisionedpv-mvzq": Phase="Pending", Reason="", readiness=false. Elapsed: 21.789335286s
May 22 07:13:12.559: INFO: Pod "exec-volume-test-preprovisionedpv-mvzq": Phase="Pending", Reason="", readiness=false. Elapsed: 23.960032797s
May 22 07:13:14.722: INFO: Pod "exec-volume-test-preprovisionedpv-mvzq": Phase="Pending", Reason="", readiness=false. Elapsed: 26.122795916s
May 22 07:13:16.884: INFO: Pod "exec-volume-test-preprovisionedpv-mvzq": Phase="Pending", Reason="", readiness=false. Elapsed: 28.285043689s
May 22 07:13:19.045: INFO: Pod "exec-volume-test-preprovisionedpv-mvzq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.446072021s
STEP: Saw pod success
May 22 07:13:19.045: INFO: Pod "exec-volume-test-preprovisionedpv-mvzq" satisfied condition "Succeeded or Failed"
May 22 07:13:19.206: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-mvzq container exec-container-preprovisionedpv-mvzq: <nil>
STEP: delete the pod
May 22 07:13:19.548: INFO: Waiting for pod exec-volume-test-preprovisionedpv-mvzq to disappear
May 22 07:13:19.712: INFO: Pod exec-volume-test-preprovisionedpv-mvzq no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-mvzq
May 22 07:13:19.713: INFO: Deleting pod "exec-volume-test-preprovisionedpv-mvzq" in namespace "volume-1045"
STEP: Deleting pv and pvc
May 22 07:13:19.873: INFO: Deleting PersistentVolumeClaim "pvc-lkxsx"
May 22 07:13:20.052: INFO: Deleting PersistentVolume "aws-mp96s"
May 22 07:13:20.469: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-03989a2bd0deab8b6", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03989a2bd0deab8b6 is currently attached to i-08a86d156837dd635
	status code: 400, request id: 721d953c-3fc2-461d-8d41-7d62459b19f6
May 22 07:13:26.236: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-03989a2bd0deab8b6", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03989a2bd0deab8b6 is currently attached to i-08a86d156837dd635
	status code: 400, request id: d6f03ef2-314c-4ce9-99f8-6dbad00d6808
May 22 07:13:32.012: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-03989a2bd0deab8b6", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03989a2bd0deab8b6 is currently attached to i-08a86d156837dd635
	status code: 400, request id: 621d375c-96c5-426b-b632-89b16bd6c59e
May 22 07:13:37.809: INFO: Successfully deleted PD "aws://ap-northeast-2a/vol-03989a2bd0deab8b6".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:37.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1045" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSSSSSS
------------------------------
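The "Couldn't delete PD ..., sleeping 5s" lines above show EC2 rejecting DeleteVolume with VolumeInUse while the volume is still attached to the instance; once it detaches, the retry succeeds. A sketch of that retry loop with aws-sdk-go, using the volume ID and region from this log purely as placeholders.

// ebsdelete.go: delete an EBS volume, retrying while it is still attached.
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("ap-northeast-2")))
	svc := ec2.New(sess)
	volumeID := "vol-03989a2bd0deab8b6" // placeholder, taken from the log above

	var lastErr error
	for attempt := 0; attempt < 10; attempt++ {
		_, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			fmt.Printf("Successfully deleted PD %q.\n", volumeID)
			return
		}
		lastErr = err
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
			// Still attached: sleep and retry, as the log above does.
			fmt.Printf("Couldn't delete PD %q, sleeping 5s: %v\n", volumeID, err)
			time.Sleep(5 * time.Second)
			continue
		}
		break // any other error is not retryable here
	}
	panic(lastErr)
}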
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 20 lines ...
May 22 07:13:29.403: INFO: PersistentVolumeClaim pvc-85m6f found but phase is Pending instead of Bound.
May 22 07:13:31.565: INFO: PersistentVolumeClaim pvc-85m6f found and phase=Bound (10.967595857s)
May 22 07:13:31.565: INFO: Waiting up to 3m0s for PersistentVolume local-gmggj to have phase Bound
May 22 07:13:31.726: INFO: PersistentVolume local-gmggj found and phase=Bound (161.732815ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-qks9
STEP: Creating a pod to test exec-volume-test
May 22 07:13:32.208: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-qks9" in namespace "volume-7465" to be "Succeeded or Failed"
May 22 07:13:32.369: INFO: Pod "exec-volume-test-preprovisionedpv-qks9": Phase="Pending", Reason="", readiness=false. Elapsed: 160.141546ms
May 22 07:13:34.530: INFO: Pod "exec-volume-test-preprovisionedpv-qks9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.322000036s
STEP: Saw pod success
May 22 07:13:34.531: INFO: Pod "exec-volume-test-preprovisionedpv-qks9" satisfied condition "Succeeded or Failed"
May 22 07:13:34.693: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-qks9 container exec-container-preprovisionedpv-qks9: <nil>
STEP: delete the pod
May 22 07:13:35.029: INFO: Waiting for pod exec-volume-test-preprovisionedpv-qks9 to disappear
May 22 07:13:35.189: INFO: Pod exec-volume-test-preprovisionedpv-qks9 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-qks9
May 22 07:13:35.189: INFO: Deleting pod "exec-volume-test-preprovisionedpv-qks9" in namespace "volume-7465"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:38.368: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 164 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, insufficient capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:39.809: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 41 lines ...
• [SLOW TEST:9.033 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
May 22 07:13:30.658: INFO: PersistentVolumeClaim pvc-cl7mf found but phase is Pending instead of Bound.
May 22 07:13:32.819: INFO: PersistentVolumeClaim pvc-cl7mf found and phase=Bound (4.491998479s)
May 22 07:13:32.819: INFO: Waiting up to 3m0s for PersistentVolume local-f776h to have phase Bound
May 22 07:13:32.979: INFO: PersistentVolume local-f776h found and phase=Bound (160.498588ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zdd5
STEP: Creating a pod to test subpath
May 22 07:13:33.463: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zdd5" in namespace "provisioning-8502" to be "Succeeded or Failed"
May 22 07:13:33.623: INFO: Pod "pod-subpath-test-preprovisionedpv-zdd5": Phase="Pending", Reason="", readiness=false. Elapsed: 160.108549ms
May 22 07:13:35.785: INFO: Pod "pod-subpath-test-preprovisionedpv-zdd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32138453s
May 22 07:13:37.945: INFO: Pod "pod-subpath-test-preprovisionedpv-zdd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.481837932s
STEP: Saw pod success
May 22 07:13:37.945: INFO: Pod "pod-subpath-test-preprovisionedpv-zdd5" satisfied condition "Succeeded or Failed"
May 22 07:13:38.105: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-zdd5 container test-container-volume-preprovisionedpv-zdd5: <nil>
STEP: delete the pod
May 22 07:13:38.445: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zdd5 to disappear
May 22 07:13:38.613: INFO: Pod pod-subpath-test-preprovisionedpv-zdd5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zdd5
May 22 07:13:38.613: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zdd5" in namespace "provisioning-8502"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":46,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:40.880: INFO: Only supported for providers [azure] (not aws)
... skipping 23 lines ...
May 22 07:13:37.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
May 22 07:13:38.177: INFO: Waiting up to 5m0s for pod "security-context-4870bfa1-7907-4530-9b13-53a9fd5bb186" in namespace "security-context-5958" to be "Succeeded or Failed"
May 22 07:13:38.334: INFO: Pod "security-context-4870bfa1-7907-4530-9b13-53a9fd5bb186": Phase="Pending", Reason="", readiness=false. Elapsed: 157.202569ms
May 22 07:13:40.493: INFO: Pod "security-context-4870bfa1-7907-4530-9b13-53a9fd5bb186": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.316044171s
STEP: Saw pod success
May 22 07:13:40.493: INFO: Pod "security-context-4870bfa1-7907-4530-9b13-53a9fd5bb186" satisfied condition "Succeeded or Failed"
May 22 07:13:40.651: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod security-context-4870bfa1-7907-4530-9b13-53a9fd5bb186 container test-container: <nil>
STEP: delete the pod
May 22 07:13:40.976: INFO: Waiting for pod security-context-4870bfa1-7907-4530-9b13-53a9fd5bb186 to disappear
May 22 07:13:41.153: INFO: Pod security-context-4870bfa1-7907-4530-9b13-53a9fd5bb186 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:41.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-5958" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":46,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] API priority and fairness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 131 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":4,"skipped":36,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:42.615: INFO: Only supported for providers [gce gke] (not aws)
... skipping 71 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:42.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-989" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":5,"skipped":37,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:42.663: INFO: Only supported for providers [gce gke] (not aws)
... skipping 71 lines ...
• [SLOW TEST:5.799 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment reaping should cascade to its replica sets and pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:92
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":10,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:43.877: INFO: Driver "local" does not provide raw block - skipping
... skipping 64 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":3,"skipped":31,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:45.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4614" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":7,"skipped":51,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:46.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-5841" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":4,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:47.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1097" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":5,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:47.770: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 90 lines ...
May 22 07:13:05.834: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-m58bd] to have phase Bound
May 22 07:13:06.000: INFO: PersistentVolumeClaim pvc-m58bd found and phase=Bound (165.746813ms)
STEP: Deleting the previously created pod
May 22 07:13:16.819: INFO: Deleting pod "pvc-volume-tester-h2pt7" in namespace "csi-mock-volumes-3545"
May 22 07:13:16.983: INFO: Wait up to 5m0s for pod "pvc-volume-tester-h2pt7" to be fully deleted
STEP: Checking CSI driver logs
May 22 07:13:23.476: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/8445312e-34e8-4835-903b-be70c812cadc/volumes/kubernetes.io~csi/pvc-fafa7475-054d-42a1-8622-ef7114a0fbc1/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-h2pt7
May 22 07:13:23.476: INFO: Deleting pod "pvc-volume-tester-h2pt7" in namespace "csi-mock-volumes-3545"
STEP: Deleting claim pvc-m58bd
May 22 07:13:23.966: INFO: Waiting up to 2m0s for PersistentVolume pvc-fafa7475-054d-42a1-8622-ef7114a0fbc1 to get deleted
May 22 07:13:24.131: INFO: PersistentVolume pvc-fafa7475-054d-42a1-8622-ef7114a0fbc1 was removed
STEP: Deleting storageclass csi-mock-volumes-3545-sc6vrq9
... skipping 43 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when CSIDriver does not exist
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:45.253 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1055
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":4,"skipped":34,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:49.790: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 50 lines ...
• [SLOW TEST:80.731 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:244
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:50.564: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
May 22 07:13:47.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
May 22 07:13:48.435: INFO: Waiting up to 5m0s for pod "client-containers-0964a683-32e5-43bf-8eaa-a69d08ff9bc3" in namespace "containers-7607" to be "Succeeded or Failed"
May 22 07:13:48.593: INFO: Pod "client-containers-0964a683-32e5-43bf-8eaa-a69d08ff9bc3": Phase="Pending", Reason="", readiness=false. Elapsed: 157.43678ms
May 22 07:13:50.755: INFO: Pod "client-containers-0964a683-32e5-43bf-8eaa-a69d08ff9bc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.319412344s
STEP: Saw pod success
May 22 07:13:50.755: INFO: Pod "client-containers-0964a683-32e5-43bf-8eaa-a69d08ff9bc3" satisfied condition "Succeeded or Failed"
May 22 07:13:50.913: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod client-containers-0964a683-32e5-43bf-8eaa-a69d08ff9bc3 container agnhost-container: <nil>
STEP: delete the pod
May 22 07:13:51.238: INFO: Waiting for pod client-containers-0964a683-32e5-43bf-8eaa-a69d08ff9bc3 to disappear
May 22 07:13:51.398: INFO: Pod client-containers-0964a683-32e5-43bf-8eaa-a69d08ff9bc3 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:51.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7607" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":57,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:51.738: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 139 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 22 07:13:49.092: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-7445070d-e8c5-47c6-81c1-5f090ff86c8d" in namespace "security-context-test-4290" to be "Succeeded or Failed"
May 22 07:13:49.256: INFO: Pod "busybox-privileged-false-7445070d-e8c5-47c6-81c1-5f090ff86c8d": Phase="Pending", Reason="", readiness=false. Elapsed: 163.397905ms
May 22 07:13:51.422: INFO: Pod "busybox-privileged-false-7445070d-e8c5-47c6-81c1-5f090ff86c8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329372296s
May 22 07:13:53.585: INFO: Pod "busybox-privileged-false-7445070d-e8c5-47c6-81c1-5f090ff86c8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.492773024s
May 22 07:13:53.585: INFO: Pod "busybox-privileged-false-7445070d-e8c5-47c6-81c1-5f090ff86c8d" satisfied condition "Succeeded or Failed"
May 22 07:13:53.751: INFO: Got logs for pod "busybox-privileged-false-7445070d-e8c5-47c6-81c1-5f090ff86c8d": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:13:53.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4290" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:54.112: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 77 lines ...
• [SLOW TEST:89.895 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:13:59.653: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 44 lines ...
May 22 07:13:55.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 22 07:13:56.306: INFO: Waiting up to 5m0s for pod "pod-59581bcb-498d-48df-a4dd-25b6199ee0e4" in namespace "emptydir-411" to be "Succeeded or Failed"
May 22 07:13:56.469: INFO: Pod "pod-59581bcb-498d-48df-a4dd-25b6199ee0e4": Phase="Pending", Reason="", readiness=false. Elapsed: 162.893363ms
May 22 07:13:58.633: INFO: Pod "pod-59581bcb-498d-48df-a4dd-25b6199ee0e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326819065s
May 22 07:14:00.808: INFO: Pod "pod-59581bcb-498d-48df-a4dd-25b6199ee0e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.501663744s
STEP: Saw pod success
May 22 07:14:00.808: INFO: Pod "pod-59581bcb-498d-48df-a4dd-25b6199ee0e4" satisfied condition "Succeeded or Failed"
May 22 07:14:00.972: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod pod-59581bcb-498d-48df-a4dd-25b6199ee0e4 container test-container: <nil>
STEP: delete the pod
May 22 07:14:01.348: INFO: Waiting for pod pod-59581bcb-498d-48df-a4dd-25b6199ee0e4 to disappear
May 22 07:14:01.516: INFO: Pod pod-59581bcb-498d-48df-a4dd-25b6199ee0e4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.522 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":5,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:01.880: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 68 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
May 22 07:13:48.765: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-d355ea6c-0cf1-4e5e-ab48-1c6d29aa20e6" in namespace "security-context-test-8712" to be "Succeeded or Failed"
May 22 07:13:48.926: INFO: Pod "alpine-nnp-nil-d355ea6c-0cf1-4e5e-ab48-1c6d29aa20e6": Phase="Pending", Reason="", readiness=false. Elapsed: 161.505521ms
May 22 07:13:51.087: INFO: Pod "alpine-nnp-nil-d355ea6c-0cf1-4e5e-ab48-1c6d29aa20e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322405947s
May 22 07:13:53.251: INFO: Pod "alpine-nnp-nil-d355ea6c-0cf1-4e5e-ab48-1c6d29aa20e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485891777s
May 22 07:13:55.411: INFO: Pod "alpine-nnp-nil-d355ea6c-0cf1-4e5e-ab48-1c6d29aa20e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.646490956s
May 22 07:13:57.572: INFO: Pod "alpine-nnp-nil-d355ea6c-0cf1-4e5e-ab48-1c6d29aa20e6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.807596754s
May 22 07:13:59.737: INFO: Pod "alpine-nnp-nil-d355ea6c-0cf1-4e5e-ab48-1c6d29aa20e6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.972614506s
May 22 07:14:01.899: INFO: Pod "alpine-nnp-nil-d355ea6c-0cf1-4e5e-ab48-1c6d29aa20e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.134279131s
May 22 07:14:01.899: INFO: Pod "alpine-nnp-nil-d355ea6c-0cf1-4e5e-ab48-1c6d29aa20e6" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:14:02.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8712" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":6,"skipped":64,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
• [SLOW TEST:30.417 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:03.762: INFO: Only supported for providers [vsphere] (not aws)
... skipping 21 lines ...
May 22 07:13:59.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
May 22 07:14:00.625: INFO: Waiting up to 5m0s for pod "security-context-cd753a07-a062-4495-8a79-f38e1cb824cd" in namespace "security-context-2562" to be "Succeeded or Failed"
May 22 07:14:00.781: INFO: Pod "security-context-cd753a07-a062-4495-8a79-f38e1cb824cd": Phase="Pending", Reason="", readiness=false. Elapsed: 156.663895ms
May 22 07:14:02.946: INFO: Pod "security-context-cd753a07-a062-4495-8a79-f38e1cb824cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321427499s
STEP: Saw pod success
May 22 07:14:02.946: INFO: Pod "security-context-cd753a07-a062-4495-8a79-f38e1cb824cd" satisfied condition "Succeeded or Failed"
May 22 07:14:03.103: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod security-context-cd753a07-a062-4495-8a79-f38e1cb824cd container test-container: <nil>
STEP: delete the pod
May 22 07:14:03.438: INFO: Waiting for pod security-context-cd753a07-a062-4495-8a79-f38e1cb824cd to disappear
May 22 07:14:03.595: INFO: Pod security-context-cd753a07-a062-4495-8a79-f38e1cb824cd no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:14:03.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-2562" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:03.920: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:05.686: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 112 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:05.922: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:06.179: INFO: Driver nfs doesn't support Block -- skipping
... skipping 22 lines ...
STEP: Creating a kubernetes client
May 22 07:14:01.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
May 22 07:14:02.742: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:14:06.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-659" for this suite.


• [SLOW TEST:5.168 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 60 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
... skipping 9 lines ...
May 22 07:13:36.806: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-3584kbmlf
STEP: creating a claim
May 22 07:13:36.961: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
May 22 07:13:37.273: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
May 22 07:13:37.591: INFO: Error updating pvc aws5kkqq: PersistentVolumeClaim "aws5kkqq" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-3584kbmlf",
  	... // 2 identical fields
  }

May 22 07:13:39.901 - 07:14:08.211: INFO: Error updating pvc aws5kkqq: rejected with the same "spec is immutable" message and field diff on 16 further update attempts, retried at roughly 2s intervals
... skipping 217 duplicate error lines ...
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":5,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
May 22 07:13:59.039: INFO: PersistentVolumeClaim pvc-vv42v found but phase is Pending instead of Bound.
May 22 07:14:01.198: INFO: PersistentVolumeClaim pvc-vv42v found and phase=Bound (2.315975661s)
May 22 07:14:01.198: INFO: Waiting up to 3m0s for PersistentVolume local-jcb9l to have phase Bound
May 22 07:14:01.356: INFO: PersistentVolume local-jcb9l found and phase=Bound (158.392636ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gn77
STEP: Creating a pod to test subpath
May 22 07:14:01.833: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gn77" in namespace "provisioning-2769" to be "Succeeded or Failed"
May 22 07:14:01.991: INFO: Pod "pod-subpath-test-preprovisionedpv-gn77": Phase="Pending", Reason="", readiness=false. Elapsed: 157.809753ms
May 22 07:14:04.149: INFO: Pod "pod-subpath-test-preprovisionedpv-gn77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315906824s
May 22 07:14:06.308: INFO: Pod "pod-subpath-test-preprovisionedpv-gn77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.474647232s
STEP: Saw pod success
May 22 07:14:06.308: INFO: Pod "pod-subpath-test-preprovisionedpv-gn77" satisfied condition "Succeeded or Failed"
May 22 07:14:06.467: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-gn77 container test-container-subpath-preprovisionedpv-gn77: <nil>
STEP: delete the pod
May 22 07:14:06.796: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gn77 to disappear
May 22 07:14:06.954: INFO: Pod pod-subpath-test-preprovisionedpv-gn77 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gn77
May 22 07:14:06.954: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gn77" in namespace "provisioning-2769"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":79,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 54 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:11.526: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 282 lines ...
• [SLOW TEST:10.138 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":7,"skipped":74,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
May 22 07:14:12.807: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
May 22 07:14:12.807: INFO: stdout: "scheduler controller-manager etcd-0 etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
May 22 07:14:12.807: INFO: Running '/tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1664 get componentstatuses scheduler'
May 22 07:14:13.394: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
May 22 07:14:13.394: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of controller-manager
May 22 07:14:13.394: INFO: Running '/tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1664 get componentstatuses controller-manager'
May 22 07:14:13.987: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
May 22 07:14:13.987: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of etcd-0
May 22 07:14:13.987: INFO: Running '/tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1664 get componentstatuses etcd-0'
May 22 07:14:14.555: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
May 22 07:14:14.555: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of etcd-1
May 22 07:14:14.555: INFO: Running '/tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1664 get componentstatuses etcd-1'
May 22 07:14:15.125: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
May 22 07:14:15.125: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:14:15.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1664" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":10,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:15.453: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 220 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":3,"skipped":9,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":3,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:13:16.142: INFO: >>> kubeConfig: /root/.kube/config
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:17.786: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 21 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-2cf5eb2e-7735-45f6-a9fc-c16ce897f16a
STEP: Creating a pod to test consume configMaps
May 22 07:14:14.876: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ebd2c9b7-21e9-4eac-9e2d-a8a89a8d2a4b" in namespace "projected-8937" to be "Succeeded or Failed"
May 22 07:14:15.036: INFO: Pod "pod-projected-configmaps-ebd2c9b7-21e9-4eac-9e2d-a8a89a8d2a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 160.268134ms
May 22 07:14:17.197: INFO: Pod "pod-projected-configmaps-ebd2c9b7-21e9-4eac-9e2d-a8a89a8d2a4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321180553s
STEP: Saw pod success
May 22 07:14:17.197: INFO: Pod "pod-projected-configmaps-ebd2c9b7-21e9-4eac-9e2d-a8a89a8d2a4b" satisfied condition "Succeeded or Failed"
May 22 07:14:17.365: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod pod-projected-configmaps-ebd2c9b7-21e9-4eac-9e2d-a8a89a8d2a4b container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 22 07:14:17.783: INFO: Waiting for pod pod-projected-configmaps-ebd2c9b7-21e9-4eac-9e2d-a8a89a8d2a4b to disappear
May 22 07:14:17.943: INFO: Pod pod-projected-configmaps-ebd2c9b7-21e9-4eac-9e2d-a8a89a8d2a4b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:14:17.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8937" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":80,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 57 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-f036a32c-bd3e-40bc-b795-4b6b40ae6322
STEP: Creating a pod to test consume secrets
May 22 07:14:12.821: INFO: Waiting up to 5m0s for pod "pod-secrets-025ef31c-2128-4e04-aa8e-5e54a28eae5b" in namespace "secrets-250" to be "Succeeded or Failed"
May 22 07:14:12.982: INFO: Pod "pod-secrets-025ef31c-2128-4e04-aa8e-5e54a28eae5b": Phase="Pending", Reason="", readiness=false. Elapsed: 161.458049ms
May 22 07:14:15.146: INFO: Pod "pod-secrets-025ef31c-2128-4e04-aa8e-5e54a28eae5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324966748s
May 22 07:14:17.307: INFO: Pod "pod-secrets-025ef31c-2128-4e04-aa8e-5e54a28eae5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.486401836s
May 22 07:14:19.475: INFO: Pod "pod-secrets-025ef31c-2128-4e04-aa8e-5e54a28eae5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.654477095s
STEP: Saw pod success
May 22 07:14:19.475: INFO: Pod "pod-secrets-025ef31c-2128-4e04-aa8e-5e54a28eae5b" satisfied condition "Succeeded or Failed"
May 22 07:14:19.636: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod pod-secrets-025ef31c-2128-4e04-aa8e-5e54a28eae5b container secret-volume-test: <nil>
STEP: delete the pod
May 22 07:14:19.963: INFO: Waiting for pod pod-secrets-025ef31c-2128-4e04-aa8e-5e54a28eae5b to disappear
May 22 07:14:20.124: INFO: Pod pod-secrets-025ef31c-2128-4e04-aa8e-5e54a28eae5b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.805 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":43,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:20.476: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
• [SLOW TEST:38.749 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":11,"skipped":42,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:22.688: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 69 lines ...
May 22 07:14:15.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
May 22 07:14:16.550: INFO: Waiting up to 5m0s for pod "pod-6a2e22d7-dbf1-45fb-a23f-99f14868a915" in namespace "emptydir-4254" to be "Succeeded or Failed"
May 22 07:14:16.709: INFO: Pod "pod-6a2e22d7-dbf1-45fb-a23f-99f14868a915": Phase="Pending", Reason="", readiness=false. Elapsed: 158.429058ms
May 22 07:14:18.868: INFO: Pod "pod-6a2e22d7-dbf1-45fb-a23f-99f14868a915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31705279s
May 22 07:14:21.027: INFO: Pod "pod-6a2e22d7-dbf1-45fb-a23f-99f14868a915": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476117755s
May 22 07:14:23.184: INFO: Pod "pod-6a2e22d7-dbf1-45fb-a23f-99f14868a915": Phase="Pending", Reason="", readiness=false. Elapsed: 6.633930112s
May 22 07:14:25.343: INFO: Pod "pod-6a2e22d7-dbf1-45fb-a23f-99f14868a915": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.7925884s
STEP: Saw pod success
May 22 07:14:25.343: INFO: Pod "pod-6a2e22d7-dbf1-45fb-a23f-99f14868a915" satisfied condition "Succeeded or Failed"
May 22 07:14:25.501: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod pod-6a2e22d7-dbf1-45fb-a23f-99f14868a915 container test-container: <nil>
STEP: delete the pod
May 22 07:14:25.821: INFO: Waiting for pod pod-6a2e22d7-dbf1-45fb-a23f-99f14868a915 to disappear
May 22 07:14:25.979: INFO: Pod pod-6a2e22d7-dbf1-45fb-a23f-99f14868a915 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.723 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":107,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
May 22 07:14:14.951: INFO: PersistentVolumeClaim pvc-54fdz found but phase is Pending instead of Bound.
May 22 07:14:17.109: INFO: PersistentVolumeClaim pvc-54fdz found and phase=Bound (4.47896184s)
May 22 07:14:17.109: INFO: Waiting up to 3m0s for PersistentVolume local-lj47g to have phase Bound
May 22 07:14:17.267: INFO: PersistentVolume local-lj47g found and phase=Bound (157.568111ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bzkg
STEP: Creating a pod to test subpath
May 22 07:14:17.782: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bzkg" in namespace "provisioning-7932" to be "Succeeded or Failed"
May 22 07:14:17.940: INFO: Pod "pod-subpath-test-preprovisionedpv-bzkg": Phase="Pending", Reason="", readiness=false. Elapsed: 157.970239ms
May 22 07:14:20.101: INFO: Pod "pod-subpath-test-preprovisionedpv-bzkg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319641784s
May 22 07:14:22.259: INFO: Pod "pod-subpath-test-preprovisionedpv-bzkg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.477607479s
May 22 07:14:24.418: INFO: Pod "pod-subpath-test-preprovisionedpv-bzkg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.636184719s
May 22 07:14:26.577: INFO: Pod "pod-subpath-test-preprovisionedpv-bzkg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.795277288s
STEP: Saw pod success
May 22 07:14:26.577: INFO: Pod "pod-subpath-test-preprovisionedpv-bzkg" satisfied condition "Succeeded or Failed"
May 22 07:14:26.735: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-bzkg container test-container-subpath-preprovisionedpv-bzkg: <nil>
STEP: delete the pod
May 22 07:14:27.066: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bzkg to disappear
May 22 07:14:27.225: INFO: Pod pod-subpath-test-preprovisionedpv-bzkg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bzkg
May 22 07:14:27.225: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bzkg" in namespace "provisioning-7932"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:29.394: INFO: Only supported for providers [gce gke] (not aws)
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 22 07:14:20.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d6e913e-1d8e-4c73-8c44-3694e18348ad" in namespace "projected-7106" to be "Succeeded or Failed"
May 22 07:14:20.615: INFO: Pod "downwardapi-volume-9d6e913e-1d8e-4c73-8c44-3694e18348ad": Phase="Pending", Reason="", readiness=false. Elapsed: 160.057412ms
May 22 07:14:22.776: INFO: Pod "downwardapi-volume-9d6e913e-1d8e-4c73-8c44-3694e18348ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321057041s
May 22 07:14:24.936: INFO: Pod "downwardapi-volume-9d6e913e-1d8e-4c73-8c44-3694e18348ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481742209s
May 22 07:14:27.100: INFO: Pod "downwardapi-volume-9d6e913e-1d8e-4c73-8c44-3694e18348ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.645712011s
May 22 07:14:29.261: INFO: Pod "downwardapi-volume-9d6e913e-1d8e-4c73-8c44-3694e18348ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.806804647s
STEP: Saw pod success
May 22 07:14:29.262: INFO: Pod "downwardapi-volume-9d6e913e-1d8e-4c73-8c44-3694e18348ad" satisfied condition "Succeeded or Failed"
May 22 07:14:29.423: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod downwardapi-volume-9d6e913e-1d8e-4c73-8c44-3694e18348ad container client-container: <nil>
STEP: delete the pod
May 22 07:14:29.750: INFO: Waiting for pod downwardapi-volume-9d6e913e-1d8e-4c73-8c44-3694e18348ad to disappear
May 22 07:14:29.910: INFO: Pod downwardapi-volume-9d6e913e-1d8e-4c73-8c44-3694e18348ad no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.754 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":94,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:30.250: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 40 lines ...
May 22 07:14:15.300: INFO: PersistentVolumeClaim pvc-z55tm found but phase is Pending instead of Bound.
May 22 07:14:17.458: INFO: PersistentVolumeClaim pvc-z55tm found and phase=Bound (2.315190309s)
May 22 07:14:17.458: INFO: Waiting up to 3m0s for PersistentVolume local-q2cm7 to have phase Bound
May 22 07:14:17.622: INFO: PersistentVolume local-q2cm7 found and phase=Bound (164.626648ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-cs67
STEP: Creating a pod to test exec-volume-test
May 22 07:14:18.103: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-cs67" in namespace "volume-7530" to be "Succeeded or Failed"
May 22 07:14:18.260: INFO: Pod "exec-volume-test-preprovisionedpv-cs67": Phase="Pending", Reason="", readiness=false. Elapsed: 156.710314ms
May 22 07:14:20.417: INFO: Pod "exec-volume-test-preprovisionedpv-cs67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313833528s
May 22 07:14:22.574: INFO: Pod "exec-volume-test-preprovisionedpv-cs67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470914367s
May 22 07:14:24.732: INFO: Pod "exec-volume-test-preprovisionedpv-cs67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.629139157s
May 22 07:14:26.893: INFO: Pod "exec-volume-test-preprovisionedpv-cs67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.789638283s
STEP: Saw pod success
May 22 07:14:26.893: INFO: Pod "exec-volume-test-preprovisionedpv-cs67" satisfied condition "Succeeded or Failed"
May 22 07:14:27.053: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-cs67 container exec-container-preprovisionedpv-cs67: <nil>
STEP: delete the pod
May 22 07:14:27.374: INFO: Waiting for pod exec-volume-test-preprovisionedpv-cs67 to disappear
May 22 07:14:27.531: INFO: Pod exec-volume-test-preprovisionedpv-cs67 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-cs67
May 22 07:14:27.531: INFO: Deleting pod "exec-volume-test-preprovisionedpv-cs67" in namespace "volume-7530"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:32.723: INFO: Only supported for providers [openstack] (not aws)
... skipping 102 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":62,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:15.387 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:35.896: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:35.941: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 190 lines ...
May 22 07:13:31.877: INFO: PersistentVolumeClaim csi-hostpathlpdrq found but phase is Pending instead of Bound.
May 22 07:13:34.037: INFO: PersistentVolumeClaim csi-hostpathlpdrq found but phase is Pending instead of Bound.
May 22 07:13:36.196: INFO: PersistentVolumeClaim csi-hostpathlpdrq found but phase is Pending instead of Bound.
May 22 07:13:38.354: INFO: PersistentVolumeClaim csi-hostpathlpdrq found and phase=Bound (26.078749077s)
STEP: Creating pod pod-subpath-test-dynamicpv-7m9t
STEP: Creating a pod to test subpath
May 22 07:13:38.827: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-7m9t" in namespace "provisioning-7283" to be "Succeeded or Failed"
May 22 07:13:38.984: INFO: Pod "pod-subpath-test-dynamicpv-7m9t": Phase="Pending", Reason="", readiness=false. Elapsed: 157.023349ms
May 22 07:13:41.152: INFO: Pod "pod-subpath-test-dynamicpv-7m9t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324849732s
May 22 07:13:43.312: INFO: Pod "pod-subpath-test-dynamicpv-7m9t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48513683s
May 22 07:13:45.471: INFO: Pod "pod-subpath-test-dynamicpv-7m9t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.643980043s
May 22 07:13:47.631: INFO: Pod "pod-subpath-test-dynamicpv-7m9t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.803264878s
May 22 07:13:49.788: INFO: Pod "pod-subpath-test-dynamicpv-7m9t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.960870663s
May 22 07:13:51.946: INFO: Pod "pod-subpath-test-dynamicpv-7m9t": Phase="Pending", Reason="", readiness=false. Elapsed: 13.119186803s
May 22 07:13:54.105: INFO: Pod "pod-subpath-test-dynamicpv-7m9t": Phase="Pending", Reason="", readiness=false. Elapsed: 15.277707948s
May 22 07:13:56.263: INFO: Pod "pod-subpath-test-dynamicpv-7m9t": Phase="Pending", Reason="", readiness=false. Elapsed: 17.435385134s
May 22 07:13:58.420: INFO: Pod "pod-subpath-test-dynamicpv-7m9t": Phase="Pending", Reason="", readiness=false. Elapsed: 19.592721772s
May 22 07:14:00.578: INFO: Pod "pod-subpath-test-dynamicpv-7m9t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.750377604s
STEP: Saw pod success
May 22 07:14:00.578: INFO: Pod "pod-subpath-test-dynamicpv-7m9t" satisfied condition "Succeeded or Failed"
May 22 07:14:00.735: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-7m9t container test-container-volume-dynamicpv-7m9t: <nil>
STEP: delete the pod
May 22 07:14:01.069: INFO: Waiting for pod pod-subpath-test-dynamicpv-7m9t to disappear
May 22 07:14:01.226: INFO: Pod pod-subpath-test-dynamicpv-7m9t no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-7m9t
May 22 07:14:01.226: INFO: Deleting pod "pod-subpath-test-dynamicpv-7m9t" in namespace "provisioning-7283"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:36.316: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 70 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support port-forward
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:619
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":12,"skipped":59,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:40.215: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 99 lines ...
• [SLOW TEST:23.083 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":5,"skipped":33,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:40.909: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 153 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":36,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:41.457: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":5,"skipped":18,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:14:13.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:14:42.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-3463" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":6,"skipped":39,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 71 lines ...
May 22 07:14:40.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 22 07:14:43.801: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:14:44.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8172" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:44.452: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 92 lines ...
May 22 07:13:53.411: INFO: PersistentVolumeClaim pvc-zrn6l found and phase=Bound (156.977361ms)
STEP: Deleting the previously created pod
May 22 07:14:05.199: INFO: Deleting pod "pvc-volume-tester-fkh4d" in namespace "csi-mock-volumes-1734"
May 22 07:14:05.357: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fkh4d" to be fully deleted
STEP: Checking CSI driver logs
May 22 07:14:15.860: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IkI3SWlMcGFfZHhUdlF4U1lPUE5QVE1MakhTODhSZ3BRZ2RzNUpuNzJ5Mm8ifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2MjE2NjgyNDEsImlhdCI6MTYyMTY2NzY0MSwiaXNzIjoiaHR0cHM6Ly9hcGkuaW50ZXJuYWwuZTJlLTZmZjU5MzBhMWYtY2I3MGMudGVzdC1jbmNmLWF3cy5rOHMuaW8iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImNzaS1tb2NrLXZvbHVtZXMtMTczNCIsInBvZCI6eyJuYW1lIjoicHZjLXZvbHVtZS10ZXN0ZXItZmtoNGQiLCJ1aWQiOiIxNjIzOTIwMC05ZThlLTQyZjAtOGU4Ni1hOTMxMjdkMmMwMWMifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiJkMjNiNGEwMS0wMzQ4LTQwNjEtOWEwNC1jN2E5ODZjNDllMjUifX0sIm5iZiI6MTYyMTY2NzY0MSwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNzaS1tb2NrLXZvbHVtZXMtMTczNDpkZWZhdWx0In0.lYiTgfNmJAqmSAEfbOejM3ts2OhzIg2ol8d7Tcref9VMvOfuuRXMvgaoIXVuak4mrmtmOSvOA-MfALvwQHqsnwA3pkRLmaG7lqMoEIxiyOXUevBX6stb1WBVNh8KCTScVGVJoDW8XPRqxgVdaOXcoomT_v6NumPBD-g83sv-jorhF4AiJLfDnEBcgme5oGMAoNYZ1rBhwfJyBYz4qINLqxQc9YAJKEJTlW_Pd9Ni6UICEE4Af-Mr_mgPWb1_BxQsQxrzWsU3Y4_Yp46BEJT9gQZF7pz2vUj6JBA-mLjPPdMrXX0s_DaOdKkNQfTU7Gs5btD64qkC0714mzam44qQvA","expirationTimestamp":"2021-05-22T07:24:01Z"}}
May 22 07:14:15.860: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/16239200-9e8e-42f0-8e86-a93127d2c01c/volumes/kubernetes.io~csi/pvc-4e51cd19-0f70-4123-9b46-6a7ceac69da1/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-fkh4d
May 22 07:14:15.860: INFO: Deleting pod "pvc-volume-tester-fkh4d" in namespace "csi-mock-volumes-1734"
STEP: Deleting claim pvc-zrn6l
May 22 07:14:16.330: INFO: Waiting up to 2m0s for PersistentVolume pvc-4e51cd19-0f70-4123-9b46-6a7ceac69da1 to get deleted
May 22 07:14:16.489: INFO: PersistentVolume pvc-4e51cd19-0f70-4123-9b46-6a7ceac69da1 was removed
STEP: Deleting storageclass csi-mock-volumes-1734-sc4srt5
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374
    token should be plumbed down when csiServiceAccountTokenEnabled=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":4,"skipped":20,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:44.568: INFO: Only supported for providers [azure] (not aws)
... skipping 70 lines ...
      Block volumes do not support mount options - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:184
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":10,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:14:44.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
STEP: Destroying namespace "services-1978" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":5,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:14:41.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
May 22 07:14:42.405: INFO: Waiting up to 5m0s for pod "security-context-6cb5098b-add7-4ebe-9c31-eb89124b98c7" in namespace "security-context-1998" to be "Succeeded or Failed"
May 22 07:14:42.561: INFO: Pod "security-context-6cb5098b-add7-4ebe-9c31-eb89124b98c7": Phase="Pending", Reason="", readiness=false. Elapsed: 155.644444ms
May 22 07:14:44.718: INFO: Pod "security-context-6cb5098b-add7-4ebe-9c31-eb89124b98c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312633743s
May 22 07:14:46.875: INFO: Pod "security-context-6cb5098b-add7-4ebe-9c31-eb89124b98c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.469609473s
STEP: Saw pod success
May 22 07:14:46.875: INFO: Pod "security-context-6cb5098b-add7-4ebe-9c31-eb89124b98c7" satisfied condition "Succeeded or Failed"
May 22 07:14:47.031: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod security-context-6cb5098b-add7-4ebe-9c31-eb89124b98c7 container test-container: <nil>
STEP: delete the pod
May 22 07:14:47.349: INFO: Waiting for pod security-context-6cb5098b-add7-4ebe-9c31-eb89124b98c7 to disappear
May 22 07:14:47.506: INFO: Pod security-context-6cb5098b-add7-4ebe-9c31-eb89124b98c7 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.360 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":6,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:47.835: INFO: Driver aws doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
• [SLOW TEST:8.647 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":5,"skipped":29,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:49.205: INFO: Driver hostPath doesn't support ext3 -- skipping
... skipping 309 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":4,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:52.018: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 40 lines ...
May 22 07:14:44.134: INFO: PersistentVolumeClaim pvc-52svl found but phase is Pending instead of Bound.
May 22 07:14:46.292: INFO: PersistentVolumeClaim pvc-52svl found and phase=Bound (13.123884606s)
May 22 07:14:46.292: INFO: Waiting up to 3m0s for PersistentVolume local-kxcz6 to have phase Bound
May 22 07:14:46.449: INFO: PersistentVolume local-kxcz6 found and phase=Bound (157.207101ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-sh6c
STEP: Creating a pod to test subpath
May 22 07:14:46.924: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-sh6c" in namespace "provisioning-7115" to be "Succeeded or Failed"
May 22 07:14:47.082: INFO: Pod "pod-subpath-test-preprovisionedpv-sh6c": Phase="Pending", Reason="", readiness=false. Elapsed: 158.59415ms
May 22 07:14:49.242: INFO: Pod "pod-subpath-test-preprovisionedpv-sh6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318626541s
May 22 07:14:51.400: INFO: Pod "pod-subpath-test-preprovisionedpv-sh6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.476244947s
STEP: Saw pod success
May 22 07:14:51.400: INFO: Pod "pod-subpath-test-preprovisionedpv-sh6c" satisfied condition "Succeeded or Failed"
May 22 07:14:51.558: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-sh6c container test-container-subpath-preprovisionedpv-sh6c: <nil>
STEP: delete the pod
May 22 07:14:51.880: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-sh6c to disappear
May 22 07:14:52.038: INFO: Pod pod-subpath-test-preprovisionedpv-sh6c no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-sh6c
May 22 07:14:52.038: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-sh6c" in namespace "provisioning-7115"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":12,"skipped":108,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:54.270: INFO: Driver emptydir doesn't support ext3 -- skipping
... skipping 115 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903
    should check if kubectl can dry-run update Pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":7,"skipped":45,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:54.490: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 33 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 59 lines ...
May 22 07:13:51.783: INFO: PersistentVolumeClaim csi-hostpath6bp8b found but phase is Pending instead of Bound.
May 22 07:13:53.946: INFO: PersistentVolumeClaim csi-hostpath6bp8b found but phase is Pending instead of Bound.
May 22 07:13:56.105: INFO: PersistentVolumeClaim csi-hostpath6bp8b found but phase is Pending instead of Bound.
May 22 07:13:58.264: INFO: PersistentVolumeClaim csi-hostpath6bp8b found and phase=Bound (15.282591223s)
STEP: Creating pod pod-subpath-test-dynamicpv-xtj2
STEP: Creating a pod to test atomic-volume-subpath
May 22 07:13:58.737: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xtj2" in namespace "provisioning-8932" to be "Succeeded or Failed"
May 22 07:13:58.907: INFO: Pod "pod-subpath-test-dynamicpv-xtj2": Phase="Pending", Reason="", readiness=false. Elapsed: 170.106231ms
May 22 07:14:01.070: INFO: Pod "pod-subpath-test-dynamicpv-xtj2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332706541s
May 22 07:14:03.227: INFO: Pod "pod-subpath-test-dynamicpv-xtj2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490421656s
May 22 07:14:05.386: INFO: Pod "pod-subpath-test-dynamicpv-xtj2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649063199s
May 22 07:14:07.547: INFO: Pod "pod-subpath-test-dynamicpv-xtj2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.809449682s
May 22 07:14:09.704: INFO: Pod "pod-subpath-test-dynamicpv-xtj2": Phase="Running", Reason="", readiness=true. Elapsed: 10.967327401s
... skipping 4 lines ...
May 22 07:14:20.519: INFO: Pod "pod-subpath-test-dynamicpv-xtj2": Phase="Running", Reason="", readiness=true. Elapsed: 21.781451828s
May 22 07:14:22.677: INFO: Pod "pod-subpath-test-dynamicpv-xtj2": Phase="Running", Reason="", readiness=true. Elapsed: 23.939709104s
May 22 07:14:24.835: INFO: Pod "pod-subpath-test-dynamicpv-xtj2": Phase="Running", Reason="", readiness=true. Elapsed: 26.097482234s
May 22 07:14:26.994: INFO: Pod "pod-subpath-test-dynamicpv-xtj2": Phase="Running", Reason="", readiness=true. Elapsed: 28.256937984s
May 22 07:14:29.152: INFO: Pod "pod-subpath-test-dynamicpv-xtj2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.4150593s
STEP: Saw pod success
May 22 07:14:29.152: INFO: Pod "pod-subpath-test-dynamicpv-xtj2" satisfied condition "Succeeded or Failed"
May 22 07:14:29.310: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-xtj2 container test-container-subpath-dynamicpv-xtj2: <nil>
STEP: delete the pod
May 22 07:14:29.633: INFO: Waiting for pod pod-subpath-test-dynamicpv-xtj2 to disappear
May 22 07:14:29.790: INFO: Pod pod-subpath-test-dynamicpv-xtj2 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-xtj2
May 22 07:14:29.790: INFO: Deleting pod "pod-subpath-test-dynamicpv-xtj2" in namespace "provisioning-8932"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":51,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:54.698: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 83 lines ...
May 22 07:14:42.457: INFO: Waiting for pod aws-client to disappear
May 22 07:14:42.613: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
May 22 07:14:42.613: INFO: Deleting PersistentVolumeClaim "pvc-pkn5n"
May 22 07:14:42.778: INFO: Deleting PersistentVolume "aws-28cdv"
May 22 07:14:43.746: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-01dc2a99cda21f7b2", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01dc2a99cda21f7b2 is currently attached to i-094127254d58b1025
	status code: 400, request id: 9e607c47-d4b8-4c6e-a3b9-dd188a2e4cd5
May 22 07:14:49.563: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-01dc2a99cda21f7b2", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01dc2a99cda21f7b2 is currently attached to i-094127254d58b1025
	status code: 400, request id: e0774c23-0e47-459e-9dd4-6d9e6c12211e
May 22 07:14:55.328: INFO: Successfully deleted PD "aws://ap-northeast-2a/vol-01dc2a99cda21f7b2".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:14:55.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9216" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":4,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:55.663: INFO: Only supported for providers [azure] (not aws)
... skipping 150 lines ...
• [SLOW TEST:20.468 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:14:56.812: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-338b3b88-a4ff-4301-9490-93756c8549d8
STEP: Creating a pod to test consume configMaps
May 22 07:14:50.421: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-10df3442-b648-462a-8032-3af9ae084c45" in namespace "projected-5081" to be "Succeeded or Failed"
May 22 07:14:50.586: INFO: Pod "pod-projected-configmaps-10df3442-b648-462a-8032-3af9ae084c45": Phase="Pending", Reason="", readiness=false. Elapsed: 165.062767ms
May 22 07:14:52.750: INFO: Pod "pod-projected-configmaps-10df3442-b648-462a-8032-3af9ae084c45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328774781s
May 22 07:14:54.913: INFO: Pod "pod-projected-configmaps-10df3442-b648-462a-8032-3af9ae084c45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492091981s
May 22 07:14:57.078: INFO: Pod "pod-projected-configmaps-10df3442-b648-462a-8032-3af9ae084c45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.656618464s
STEP: Saw pod success
May 22 07:14:57.078: INFO: Pod "pod-projected-configmaps-10df3442-b648-462a-8032-3af9ae084c45" satisfied condition "Succeeded or Failed"
May 22 07:14:57.241: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-projected-configmaps-10df3442-b648-462a-8032-3af9ae084c45 container agnhost-container: <nil>
STEP: delete the pod
May 22 07:14:57.580: INFO: Waiting for pod pod-projected-configmaps-10df3442-b648-462a-8032-3af9ae084c45 to disappear
May 22 07:14:57.743: INFO: Pod pod-projected-configmaps-10df3442-b648-462a-8032-3af9ae084c45 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.802 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":46,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
May 22 07:14:45.481: INFO: PersistentVolumeClaim pvc-2rlqp found but phase is Pending instead of Bound.
May 22 07:14:47.641: INFO: PersistentVolumeClaim pvc-2rlqp found and phase=Bound (8.795582911s)
May 22 07:14:47.641: INFO: Waiting up to 3m0s for PersistentVolume local-nnwcf to have phase Bound
May 22 07:14:47.798: INFO: PersistentVolume local-nnwcf found and phase=Bound (157.076254ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4fjx
STEP: Creating a pod to test subpath
May 22 07:14:48.279: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4fjx" in namespace "provisioning-8898" to be "Succeeded or Failed"
May 22 07:14:48.436: INFO: Pod "pod-subpath-test-preprovisionedpv-4fjx": Phase="Pending", Reason="", readiness=false. Elapsed: 157.460073ms
May 22 07:14:50.594: INFO: Pod "pod-subpath-test-preprovisionedpv-4fjx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314864384s
May 22 07:14:52.751: INFO: Pod "pod-subpath-test-preprovisionedpv-4fjx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472124474s
May 22 07:14:54.908: INFO: Pod "pod-subpath-test-preprovisionedpv-4fjx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.629737284s
STEP: Saw pod success
May 22 07:14:54.909: INFO: Pod "pod-subpath-test-preprovisionedpv-4fjx" satisfied condition "Succeeded or Failed"
May 22 07:14:55.065: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-4fjx container test-container-volume-preprovisionedpv-4fjx: <nil>
STEP: delete the pod
May 22 07:14:55.392: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4fjx to disappear
May 22 07:14:55.552: INFO: Pod pod-subpath-test-preprovisionedpv-4fjx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4fjx
May 22 07:14:55.552: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4fjx" in namespace "provisioning-8898"
... skipping 49 lines ...
• [SLOW TEST:8.919 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":5,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:14:56.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-59039329-df5e-48ce-a2fd-6973fb66858c
STEP: Creating a pod to test consume secrets
May 22 07:14:57.926: INFO: Waiting up to 5m0s for pod "pod-secrets-481d3c1b-8fb3-4fe3-8483-316d093fa085" in namespace "secrets-5440" to be "Succeeded or Failed"
May 22 07:14:58.083: INFO: Pod "pod-secrets-481d3c1b-8fb3-4fe3-8483-316d093fa085": Phase="Pending", Reason="", readiness=false. Elapsed: 156.947201ms
May 22 07:15:00.241: INFO: Pod "pod-secrets-481d3c1b-8fb3-4fe3-8483-316d093fa085": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.314801815s
STEP: Saw pod success
May 22 07:15:00.241: INFO: Pod "pod-secrets-481d3c1b-8fb3-4fe3-8483-316d093fa085" satisfied condition "Succeeded or Failed"
May 22 07:15:00.398: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod pod-secrets-481d3c1b-8fb3-4fe3-8483-316d093fa085 container secret-volume-test: <nil>
STEP: delete the pod
May 22 07:15:00.734: INFO: Waiting for pod pod-secrets-481d3c1b-8fb3-4fe3-8483-316d093fa085 to disappear
May 22 07:15:00.895: INFO: Pod pod-secrets-481d3c1b-8fb3-4fe3-8483-316d093fa085 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:15:00.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5440" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:01.253: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 45 lines ...
• [SLOW TEST:7.150 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":5,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:02.845: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 22 07:14:59.092: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6237617-e0de-4d51-9f46-7cb69a905173" in namespace "downward-api-9040" to be "Succeeded or Failed"
May 22 07:14:59.263: INFO: Pod "downwardapi-volume-a6237617-e0de-4d51-9f46-7cb69a905173": Phase="Pending", Reason="", readiness=false. Elapsed: 170.56444ms
May 22 07:15:01.428: INFO: Pod "downwardapi-volume-a6237617-e0de-4d51-9f46-7cb69a905173": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335834463s
May 22 07:15:03.592: INFO: Pod "downwardapi-volume-a6237617-e0de-4d51-9f46-7cb69a905173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.499367702s
STEP: Saw pod success
May 22 07:15:03.592: INFO: Pod "downwardapi-volume-a6237617-e0de-4d51-9f46-7cb69a905173" satisfied condition "Succeeded or Failed"
May 22 07:15:03.755: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod downwardapi-volume-a6237617-e0de-4d51-9f46-7cb69a905173 container client-container: <nil>
STEP: delete the pod
May 22 07:15:04.088: INFO: Waiting for pod downwardapi-volume-a6237617-e0de-4d51-9f46-7cb69a905173 to disappear
May 22 07:15:04.255: INFO: Pod downwardapi-volume-a6237617-e0de-4d51-9f46-7cb69a905173 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.485 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":53,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 78 lines ...
May 22 07:14:13.463: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathqn5tx] to have phase Bound
May 22 07:14:13.622: INFO: PersistentVolumeClaim csi-hostpathqn5tx found but phase is Pending instead of Bound.
May 22 07:14:15.804: INFO: PersistentVolumeClaim csi-hostpathqn5tx found but phase is Pending instead of Bound.
May 22 07:14:17.968: INFO: PersistentVolumeClaim csi-hostpathqn5tx found and phase=Bound (4.505550857s)
STEP: Creating pod pod-subpath-test-dynamicpv-sd9b
STEP: Creating a pod to test subpath
May 22 07:14:18.449: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-sd9b" in namespace "provisioning-8714" to be "Succeeded or Failed"
May 22 07:14:18.609: INFO: Pod "pod-subpath-test-dynamicpv-sd9b": Phase="Pending", Reason="", readiness=false. Elapsed: 159.626792ms
May 22 07:14:20.766: INFO: Pod "pod-subpath-test-dynamicpv-sd9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317186808s
May 22 07:14:22.925: INFO: Pod "pod-subpath-test-dynamicpv-sd9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47613098s
May 22 07:14:25.083: INFO: Pod "pod-subpath-test-dynamicpv-sd9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.634230938s
May 22 07:14:27.249: INFO: Pod "pod-subpath-test-dynamicpv-sd9b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.800020769s
May 22 07:14:29.406: INFO: Pod "pod-subpath-test-dynamicpv-sd9b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.957154858s
May 22 07:14:31.564: INFO: Pod "pod-subpath-test-dynamicpv-sd9b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.114619469s
May 22 07:14:33.726: INFO: Pod "pod-subpath-test-dynamicpv-sd9b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.277018895s
May 22 07:14:35.884: INFO: Pod "pod-subpath-test-dynamicpv-sd9b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.43485844s
May 22 07:14:38.043: INFO: Pod "pod-subpath-test-dynamicpv-sd9b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.593945243s
May 22 07:14:40.200: INFO: Pod "pod-subpath-test-dynamicpv-sd9b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.75104828s
May 22 07:14:42.358: INFO: Pod "pod-subpath-test-dynamicpv-sd9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.908862817s
STEP: Saw pod success
May 22 07:14:42.358: INFO: Pod "pod-subpath-test-dynamicpv-sd9b" satisfied condition "Succeeded or Failed"
May 22 07:14:42.515: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-sd9b container test-container-subpath-dynamicpv-sd9b: <nil>
STEP: delete the pod
May 22 07:14:42.855: INFO: Waiting for pod pod-subpath-test-dynamicpv-sd9b to disappear
May 22 07:14:43.012: INFO: Pod pod-subpath-test-dynamicpv-sd9b no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-sd9b
May 22 07:14:43.012: INFO: Deleting pod "pod-subpath-test-dynamicpv-sd9b" in namespace "provisioning-8714"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":6,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:11.820: INFO: Only supported for providers [gce gke] (not aws)
... skipping 102 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:14:51.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 50 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec through an HTTP proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:436
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:11.962: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
STEP: Destroying namespace "services-7356" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":4,"skipped":29,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 407 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":5,"skipped":23,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:14.123: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
May 22 07:14:50.459: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [nfsdsmq9] to have phase Bound
May 22 07:14:50.623: INFO: PersistentVolumeClaim nfsdsmq9 found but phase is Pending instead of Bound.
May 22 07:14:52.785: INFO: PersistentVolumeClaim nfsdsmq9 found but phase is Pending instead of Bound.
May 22 07:14:54.945: INFO: PersistentVolumeClaim nfsdsmq9 found and phase=Bound (4.486687889s)
STEP: Creating pod exec-volume-test-dynamicpv-xvsw
STEP: Creating a pod to test exec-volume-test
May 22 07:14:55.428: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-xvsw" in namespace "volume-5764" to be "Succeeded or Failed"
May 22 07:14:55.591: INFO: Pod "exec-volume-test-dynamicpv-xvsw": Phase="Pending", Reason="", readiness=false. Elapsed: 163.39875ms
May 22 07:14:57.753: INFO: Pod "exec-volume-test-dynamicpv-xvsw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324853941s
May 22 07:14:59.915: INFO: Pod "exec-volume-test-dynamicpv-xvsw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.486839671s
STEP: Saw pod success
May 22 07:14:59.915: INFO: Pod "exec-volume-test-dynamicpv-xvsw" satisfied condition "Succeeded or Failed"
May 22 07:15:00.076: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod exec-volume-test-dynamicpv-xvsw container exec-container-dynamicpv-xvsw: <nil>
STEP: delete the pod
May 22 07:15:00.408: INFO: Waiting for pod exec-volume-test-dynamicpv-xvsw to disappear
May 22 07:15:00.569: INFO: Pod exec-volume-test-dynamicpv-xvsw no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-xvsw
May 22 07:15:00.569: INFO: Deleting pod "exec-volume-test-dynamicpv-xvsw" in namespace "volume-5764"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:14.210: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 249 lines ...
• [SLOW TEST:67.759 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":10,"skipped":97,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":6,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:16.926: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 22 07:15:12.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-857fd22d-880b-4091-8c50-d3de533a35e7" in namespace "downward-api-8258" to be "Succeeded or Failed"
May 22 07:15:13.078: INFO: Pod "downwardapi-volume-857fd22d-880b-4091-8c50-d3de533a35e7": Phase="Pending", Reason="", readiness=false. Elapsed: 155.544499ms
May 22 07:15:15.234: INFO: Pod "downwardapi-volume-857fd22d-880b-4091-8c50-d3de533a35e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311551282s
May 22 07:15:17.391: INFO: Pod "downwardapi-volume-857fd22d-880b-4091-8c50-d3de533a35e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.468919401s
STEP: Saw pod success
May 22 07:15:17.391: INFO: Pod "downwardapi-volume-857fd22d-880b-4091-8c50-d3de533a35e7" satisfied condition "Succeeded or Failed"
May 22 07:15:17.547: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod downwardapi-volume-857fd22d-880b-4091-8c50-d3de533a35e7 container client-container: <nil>
STEP: delete the pod
May 22 07:15:17.868: INFO: Waiting for pod downwardapi-volume-857fd22d-880b-4091-8c50-d3de533a35e7 to disappear
May 22 07:15:18.024: INFO: Pod downwardapi-volume-857fd22d-880b-4091-8c50-d3de533a35e7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.361 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:18.359: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 53 lines ...
May 22 07:14:43.206: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-6895wz6tr
STEP: creating a claim
May 22 07:14:43.366: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-jpr2
STEP: Creating a pod to test exec-volume-test
May 22 07:14:43.851: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-jpr2" in namespace "volume-6895" to be "Succeeded or Failed"
May 22 07:14:44.014: INFO: Pod "exec-volume-test-dynamicpv-jpr2": Phase="Pending", Reason="", readiness=false. Elapsed: 162.911495ms
May 22 07:14:46.175: INFO: Pod "exec-volume-test-dynamicpv-jpr2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32355013s
May 22 07:14:48.334: INFO: Pod "exec-volume-test-dynamicpv-jpr2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482993002s
May 22 07:14:50.494: INFO: Pod "exec-volume-test-dynamicpv-jpr2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.642769398s
May 22 07:14:52.655: INFO: Pod "exec-volume-test-dynamicpv-jpr2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.80344347s
May 22 07:14:54.815: INFO: Pod "exec-volume-test-dynamicpv-jpr2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.964006958s
May 22 07:14:56.975: INFO: Pod "exec-volume-test-dynamicpv-jpr2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.123421578s
STEP: Saw pod success
May 22 07:14:56.975: INFO: Pod "exec-volume-test-dynamicpv-jpr2" satisfied condition "Succeeded or Failed"
May 22 07:14:57.134: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod exec-volume-test-dynamicpv-jpr2 container exec-container-dynamicpv-jpr2: <nil>
STEP: delete the pod
May 22 07:14:57.467: INFO: Waiting for pod exec-volume-test-dynamicpv-jpr2 to disappear
May 22 07:14:57.626: INFO: Pod exec-volume-test-dynamicpv-jpr2 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-jpr2
May 22 07:14:57.626: INFO: Deleting pod "exec-volume-test-dynamicpv-jpr2" in namespace "volume-6895"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:15:19.571: INFO: >>> kubeConfig: /root/.kube/config
... skipping 25 lines ...
      Driver hostPath on volume type InlineVolume doesn't support readOnly source

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:398
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":35,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:14:59.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 157 lines ...
May 22 07:15:06.032: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
May 22 07:15:06.032: INFO: Waiting for all frontend pods to be Running.
May 22 07:15:11.234: INFO: Waiting for frontend to serve content.
May 22 07:15:11.395: INFO: Trying to add a new entry to the guestbook.
May 22 07:15:11.560: INFO: Verifying that added entry can be retrieved.
May 22 07:15:11.719: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
May 22 07:15:16.881: INFO: Running '/tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8916 delete --grace-period=0 --force -f -'
May 22 07:15:17.620: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 22 07:15:17.620: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
May 22 07:15:17.620: INFO: Running '/tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8916 delete --grace-period=0 --force -f -'
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:21.751: INFO: Only supported for providers [azure] (not aws)
... skipping 101 lines ...
May 22 07:14:48.793: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 22 07:14:48.794: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 22 07:14:48.794: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-721v5xs4      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:example.com/nfs-provisioning-721,Parameters:map[string]string{mountOptions: vers=4.1,},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-721    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-721v5xs4,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-721    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-721v5xs4,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-721    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-721v5xs4,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
May 22 07:14:49.762: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-7ft2f" in namespace "provisioning-721" to be "Succeeded or Failed"
May 22 07:14:49.927: INFO: Pod "pvc-volume-tester-writer-7ft2f": Phase="Pending", Reason="", readiness=false. Elapsed: 164.786047ms
May 22 07:14:52.088: INFO: Pod "pvc-volume-tester-writer-7ft2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326168654s
May 22 07:14:54.250: INFO: Pod "pvc-volume-tester-writer-7ft2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488020543s
May 22 07:14:56.412: INFO: Pod "pvc-volume-tester-writer-7ft2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649594706s
May 22 07:14:58.574: INFO: Pod "pvc-volume-tester-writer-7ft2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.812516455s
May 22 07:15:00.741: INFO: Pod "pvc-volume-tester-writer-7ft2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.978935768s
STEP: Saw pod success
May 22 07:15:00.741: INFO: Pod "pvc-volume-tester-writer-7ft2f" satisfied condition "Succeeded or Failed"
May 22 07:15:01.072: INFO: Pod pvc-volume-tester-writer-7ft2f has the following logs: 
May 22 07:15:01.072: INFO: Deleting pod "pvc-volume-tester-writer-7ft2f" in namespace "provisioning-721"
May 22 07:15:01.248: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-7ft2f" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-35-65.ap-northeast-2.compute.internal"
May 22 07:15:01.891: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-nt9qs" in namespace "provisioning-721" to be "Succeeded or Failed"
May 22 07:15:02.051: INFO: Pod "pvc-volume-tester-reader-nt9qs": Phase="Pending", Reason="", readiness=false. Elapsed: 160.092818ms
May 22 07:15:04.212: INFO: Pod "pvc-volume-tester-reader-nt9qs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320715731s
May 22 07:15:06.373: INFO: Pod "pvc-volume-tester-reader-nt9qs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481937049s
May 22 07:15:08.547: INFO: Pod "pvc-volume-tester-reader-nt9qs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.655647922s
STEP: Saw pod success
May 22 07:15:08.547: INFO: Pod "pvc-volume-tester-reader-nt9qs" satisfied condition "Succeeded or Failed"
May 22 07:15:08.869: INFO: Pod pvc-volume-tester-reader-nt9qs has the following logs: hello world

May 22 07:15:08.869: INFO: Deleting pod "pvc-volume-tester-reader-nt9qs" in namespace "provisioning-721"
May 22 07:15:09.035: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-nt9qs" to be fully deleted
May 22 07:15:09.195: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-tmdzw] to have phase Bound
May 22 07:15:09.355: INFO: PersistentVolumeClaim pvc-tmdzw found and phase=Bound (160.013078ms)
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:179
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":7,"skipped":70,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:6.562 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":7,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:10.746 seconds]
[sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should work for CRDs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":5,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:24.655: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:25.412: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 5 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 22 07:15:22.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60eb009b-b5bd-4a99-bfeb-f0e0e80f4a84" in namespace "projected-2526" to be "Succeeded or Failed"
May 22 07:15:22.926: INFO: Pod "downwardapi-volume-60eb009b-b5bd-4a99-bfeb-f0e0e80f4a84": Phase="Pending", Reason="", readiness=false. Elapsed: 156.729638ms
May 22 07:15:25.083: INFO: Pod "downwardapi-volume-60eb009b-b5bd-4a99-bfeb-f0e0e80f4a84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.314287161s
STEP: Saw pod success
May 22 07:15:25.083: INFO: Pod "downwardapi-volume-60eb009b-b5bd-4a99-bfeb-f0e0e80f4a84" satisfied condition "Succeeded or Failed"
May 22 07:15:25.244: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod downwardapi-volume-60eb009b-b5bd-4a99-bfeb-f0e0e80f4a84 container client-container: <nil>
STEP: delete the pod
May 22 07:15:25.565: INFO: Waiting for pod downwardapi-volume-60eb009b-b5bd-4a99-bfeb-f0e0e80f4a84 to disappear
May 22 07:15:25.722: INFO: Pod downwardapi-volume-60eb009b-b5bd-4a99-bfeb-f0e0e80f4a84 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:15:25.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2526" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:26.149: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 144 lines ...
• [SLOW TEST:10.749 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":11,"skipped":99,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:27.680: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:15:28.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5118" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":8,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:28.339: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 112 lines ...
• [SLOW TEST:7.939 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":7,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should implement legacy replacement when the update strategy is OnDelete
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:501
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":5,"skipped":58,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:29.344: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 60 lines ...
May 22 07:15:15.821: INFO: PersistentVolumeClaim pvc-bpdrw found but phase is Pending instead of Bound.
May 22 07:15:17.985: INFO: PersistentVolumeClaim pvc-bpdrw found and phase=Bound (4.491509412s)
May 22 07:15:17.985: INFO: Waiting up to 3m0s for PersistentVolume local-rpq7b to have phase Bound
May 22 07:15:18.150: INFO: PersistentVolume local-rpq7b found and phase=Bound (164.996815ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-z86z
STEP: Creating a pod to test subpath
May 22 07:15:18.650: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-z86z" in namespace "provisioning-3271" to be "Succeeded or Failed"
May 22 07:15:18.813: INFO: Pod "pod-subpath-test-preprovisionedpv-z86z": Phase="Pending", Reason="", readiness=false. Elapsed: 163.020515ms
May 22 07:15:20.982: INFO: Pod "pod-subpath-test-preprovisionedpv-z86z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332497119s
May 22 07:15:23.164: INFO: Pod "pod-subpath-test-preprovisionedpv-z86z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.514144432s
May 22 07:15:25.330: INFO: Pod "pod-subpath-test-preprovisionedpv-z86z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.679826595s
May 22 07:15:27.494: INFO: Pod "pod-subpath-test-preprovisionedpv-z86z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.844338529s
STEP: Saw pod success
May 22 07:15:27.494: INFO: Pod "pod-subpath-test-preprovisionedpv-z86z" satisfied condition "Succeeded or Failed"
May 22 07:15:27.657: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-z86z container test-container-subpath-preprovisionedpv-z86z: <nil>
STEP: delete the pod
May 22 07:15:28.002: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-z86z to disappear
May 22 07:15:28.166: INFO: Pod pod-subpath-test-preprovisionedpv-z86z no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-z86z
May 22 07:15:28.166: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-z86z" in namespace "provisioning-3271"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":8,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:30.426: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
May 22 07:13:51.375: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-1936q7nvv      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1936    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1936q7nvv,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1936    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1936q7nvv,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1936    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1936q7nvv,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-1936q7nvv    aa3b8ec9-5bd8-4ef3-9abc-b3700e5b7b66 5523 0 2021-05-22 07:13:51 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-05-22 07:13:51 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-fgzxv pvc- provisioning-1936  96f51527-4827-4e97-8fb9-5f8055c1a7c5 5539 0 2021-05-22 07:13:52 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-22 07:13:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1936q7nvv,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-c7f1ec7a-3f77-4b0e-a2b0-0a2bd1e79a0f in namespace provisioning-1936
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
May 22 07:14:11.128: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-kh62d" in namespace "provisioning-1936" to be "Succeeded or Failed"
May 22 07:14:11.292: INFO: Pod "pvc-volume-tester-writer-kh62d": Phase="Pending", Reason="", readiness=false. Elapsed: 163.172263ms
May 22 07:14:13.446: INFO: Pod "pvc-volume-tester-writer-kh62d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317575137s
May 22 07:14:15.602: INFO: Pod "pvc-volume-tester-writer-kh62d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.473885671s
May 22 07:14:17.774: INFO: Pod "pvc-volume-tester-writer-kh62d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.645463731s
May 22 07:14:19.928: INFO: Pod "pvc-volume-tester-writer-kh62d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.800113987s
May 22 07:14:22.084: INFO: Pod "pvc-volume-tester-writer-kh62d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.956040442s
... skipping 7 lines ...
May 22 07:14:39.332: INFO: Pod "pvc-volume-tester-writer-kh62d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.203663978s
May 22 07:14:41.507: INFO: Pod "pvc-volume-tester-writer-kh62d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.378767986s
May 22 07:14:43.661: INFO: Pod "pvc-volume-tester-writer-kh62d": Phase="Pending", Reason="", readiness=false. Elapsed: 32.533054799s
May 22 07:14:45.818: INFO: Pod "pvc-volume-tester-writer-kh62d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.689347791s
May 22 07:14:47.972: INFO: Pod "pvc-volume-tester-writer-kh62d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.843720869s
STEP: Saw pod success
May 22 07:14:47.972: INFO: Pod "pvc-volume-tester-writer-kh62d" satisfied condition "Succeeded or Failed"
May 22 07:14:48.287: INFO: Pod pvc-volume-tester-writer-kh62d has the following logs: 
May 22 07:14:48.287: INFO: Deleting pod "pvc-volume-tester-writer-kh62d" in namespace "provisioning-1936"
May 22 07:14:48.448: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-kh62d" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-35-65.ap-northeast-2.compute.internal"
May 22 07:14:49.084: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-j28pg" in namespace "provisioning-1936" to be "Succeeded or Failed"
May 22 07:14:49.240: INFO: Pod "pvc-volume-tester-reader-j28pg": Phase="Pending", Reason="", readiness=false. Elapsed: 155.356266ms
May 22 07:14:51.395: INFO: Pod "pvc-volume-tester-reader-j28pg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310384847s
May 22 07:14:53.550: INFO: Pod "pvc-volume-tester-reader-j28pg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.465057191s
May 22 07:14:55.704: INFO: Pod "pvc-volume-tester-reader-j28pg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.619527287s
May 22 07:14:57.862: INFO: Pod "pvc-volume-tester-reader-j28pg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.77763072s
May 22 07:15:00.018: INFO: Pod "pvc-volume-tester-reader-j28pg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.933697513s
May 22 07:15:02.174: INFO: Pod "pvc-volume-tester-reader-j28pg": Phase="Pending", Reason="", readiness=false. Elapsed: 13.089006035s
May 22 07:15:04.368: INFO: Pod "pvc-volume-tester-reader-j28pg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.283460244s
STEP: Saw pod success
May 22 07:15:04.368: INFO: Pod "pvc-volume-tester-reader-j28pg" satisfied condition "Succeeded or Failed"
May 22 07:15:04.678: INFO: Pod pvc-volume-tester-reader-j28pg has the following logs: hello world

May 22 07:15:04.678: INFO: Deleting pod "pvc-volume-tester-reader-j28pg" in namespace "provisioning-1936"
May 22 07:15:04.838: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-j28pg" to be fully deleted
May 22 07:15:04.992: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-fgzxv] to have phase Bound
May 22 07:15:05.154: INFO: PersistentVolumeClaim pvc-fgzxv found and phase=Bound (162.132957ms)
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:179
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
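(For readers unfamiliar with the polling pattern recorded in the provisioning test above — repeated "Phase=Pending ... Elapsed: Ns" lines roughly every 2s until "Succeeded or Failed" — here is a minimal sketch of such a wait loop using client-go. This is not the e2e framework's actual helper; waitForPodSuccess and the 2s interval are illustrative.)

package sketch

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForPodSuccess polls the pod until it reaches phase Succeeded,
// returning an error early if it reaches phase Failed or the timeout expires.
func waitForPodSuccess(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
    return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        switch pod.Status.Phase {
        case corev1.PodSucceeded:
            return true, nil // condition "Succeeded or Failed" satisfied
        case corev1.PodFailed:
            return false, fmt.Errorf("pod %s/%s failed", ns, name)
        }
        return false, nil // still Pending/Running: keep polling
    })
}
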
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 41 lines ...
May 22 07:14:42.755: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9561
May 22 07:14:42.918: INFO: creating *v1.StatefulSet: csi-mock-volumes-9561-5480/csi-mockplugin-attacher
May 22 07:14:43.081: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9561"
May 22 07:14:43.243: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9561 to register on node ip-172-20-35-65.ap-northeast-2.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
May 22 07:15:05.136: INFO: Error getting logs for pod inline-volume-mh2sn: the server rejected our request for an unknown reason (get pods inline-volume-mh2sn)
May 22 07:15:05.298: INFO: Deleting pod "inline-volume-mh2sn" in namespace "csi-mock-volumes-9561"
May 22 07:15:05.461: INFO: Wait up to 5m0s for pod "inline-volume-mh2sn" to be fully deleted
STEP: Deleting the previously created pod
May 22 07:15:09.787: INFO: Deleting pod "pvc-volume-tester-8p66t" in namespace "csi-mock-volumes-9561"
May 22 07:15:09.952: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8p66t" to be fully deleted
STEP: Checking CSI driver logs
May 22 07:15:14.440: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
May 22 07:15:14.440: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-8p66t
May 22 07:15:14.440: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-9561
May 22 07:15:14.440: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: f76a7f4b-4a0c-45f6-8162-9b3055353f67
May 22 07:15:14.440: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
May 22 07:15:14.440: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-98e4882863cb0af7dfdef50345338123ac857a846b42dd48acab4888dd74e1dc","target_path":"/var/lib/kubelet/pods/f76a7f4b-4a0c-45f6-8162-9b3055353f67/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-8p66t
May 22 07:15:14.440: INFO: Deleting pod "pvc-volume-tester-8p66t" in namespace "csi-mock-volumes-9561"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-9561
STEP: Waiting for namespaces [csi-mock-volumes-9561] to vanish
STEP: uninstalling csi mock driver
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    contain ephemeral=true when using inline volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":5,"skipped":54,"failed":0}

S
------------------------------
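(The CSI mock-volume test above records a NodeUnpublishVolume gRPC call with a volume_id and a kubelet target_path. A minimal sketch of issuing that same call through the CSI spec's Go bindings, github.com/container-storage-interface/spec/lib/go/csi; the function name and arguments are illustrative placeholders, not the mock driver's code.)

package sketch

import (
    "context"

    csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// unpublish issues a NodeUnpublishVolume request like the one logged above.
func unpublish(ctx context.Context, node csi.NodeClient, volumeID, targetPath string) error {
    _, err := node.NodeUnpublishVolume(ctx, &csi.NodeUnpublishVolumeRequest{
        VolumeId:   volumeID,   // e.g. the "csi-..." ID in the log
        TargetPath: targetPath, // .../volumes/kubernetes.io~csi/<vol>/mount
    })
    return err
}
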
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:33.095: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 49 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
May 22 07:15:28.501: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
May 22 07:15:28.501: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-mprm
STEP: Creating a pod to test subpath
May 22 07:15:28.669: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-mprm" in namespace "provisioning-5033" to be "Succeeded or Failed"
May 22 07:15:28.847: INFO: Pod "pod-subpath-test-inlinevolume-mprm": Phase="Pending", Reason="", readiness=false. Elapsed: 177.410621ms
May 22 07:15:31.008: INFO: Pod "pod-subpath-test-inlinevolume-mprm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338476293s
May 22 07:15:33.176: INFO: Pod "pod-subpath-test-inlinevolume-mprm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.506995206s
STEP: Saw pod success
May 22 07:15:33.176: INFO: Pod "pod-subpath-test-inlinevolume-mprm" satisfied condition "Succeeded or Failed"
May 22 07:15:33.341: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-mprm container test-container-subpath-inlinevolume-mprm: <nil>
STEP: delete the pod
May 22 07:15:33.680: INFO: Waiting for pod pod-subpath-test-inlinevolume-mprm to disappear
May 22 07:15:33.841: INFO: Pod pod-subpath-test-inlinevolume-mprm no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-mprm
May 22 07:15:33.841: INFO: Deleting pod "pod-subpath-test-inlinevolume-mprm" in namespace "provisioning-5033"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":12,"skipped":103,"failed":0}

S
------------------------------
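(The subPath test above mounts a single read-only file from a volume. A minimal sketch of that mount shape using the corev1 API types; the names and paths are illustrative, not the test's actual values.)

package sketch

import corev1 "k8s.io/api/core/v1"

// readOnlySubPathMount: the container sees one file from the volume,
// and writes to it are rejected because ReadOnly is set.
var readOnlySubPathMount = corev1.VolumeMount{
    Name:      "test-volume",             // must match a pod-level volume name
    MountPath: "/test-volume/test-file",
    SubPath:   "test-file",               // mount a single file, not the volume root
    ReadOnly:  true,
}
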
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:15:29.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
May 22 07:15:30.317: INFO: Waiting up to 5m0s for pod "downward-api-c6696eb4-cee7-48e7-affd-f24c51b3748d" in namespace "downward-api-54" to be "Succeeded or Failed"
May 22 07:15:30.474: INFO: Pod "downward-api-c6696eb4-cee7-48e7-affd-f24c51b3748d": Phase="Pending", Reason="", readiness=false. Elapsed: 156.716777ms
May 22 07:15:32.631: INFO: Pod "downward-api-c6696eb4-cee7-48e7-affd-f24c51b3748d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31429884s
May 22 07:15:34.789: INFO: Pod "downward-api-c6696eb4-cee7-48e7-affd-f24c51b3748d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.472198857s
STEP: Saw pod success
May 22 07:15:34.789: INFO: Pod "downward-api-c6696eb4-cee7-48e7-affd-f24c51b3748d" satisfied condition "Succeeded or Failed"
May 22 07:15:34.945: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod downward-api-c6696eb4-cee7-48e7-affd-f24c51b3748d container dapi-container: <nil>
STEP: delete the pod
May 22 07:15:35.267: INFO: Waiting for pod downward-api-c6696eb4-cee7-48e7-affd-f24c51b3748d to disappear
May 22 07:15:35.428: INFO: Pod downward-api-c6696eb4-cee7-48e7-affd-f24c51b3748d no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.374 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":65,"failed":0}

SS
------------------------------
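(The Downward API conformance test above checks that a pod's own UID is exposed as an environment variable. A minimal sketch of that wiring with corev1 types; the variable name is illustrative.)

package sketch

import corev1 "k8s.io/api/core/v1"

// podUIDEnv exposes the pod's metadata.uid to the container via the
// downward API fieldRef mechanism.
var podUIDEnv = corev1.EnvVar{
    Name: "POD_UID",
    ValueFrom: &corev1.EnvVarSource{
        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
    },
}
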
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:35.766: INFO: Only supported for providers [gce gke] (not aws)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 49 lines ...
• [SLOW TEST:13.224 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":8,"skipped":44,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:36.783: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":14,"skipped":81,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:38.511: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 89 lines ...
STEP: Destroying namespace "services-6504" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":9,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:15:35.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-2413/configmap-test-183ee419-d11f-46fd-8cee-c53c26a73f3f
STEP: Creating a pod to test consume configMaps
May 22 07:15:36.899: INFO: Waiting up to 5m0s for pod "pod-configmaps-211877c4-210c-4c72-8b21-ac48de934cbf" in namespace "configmap-2413" to be "Succeeded or Failed"
May 22 07:15:37.055: INFO: Pod "pod-configmaps-211877c4-210c-4c72-8b21-ac48de934cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 156.145634ms
May 22 07:15:39.218: INFO: Pod "pod-configmaps-211877c4-210c-4c72-8b21-ac48de934cbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.318805217s
STEP: Saw pod success
May 22 07:15:39.218: INFO: Pod "pod-configmaps-211877c4-210c-4c72-8b21-ac48de934cbf" satisfied condition "Succeeded or Failed"
May 22 07:15:39.374: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod pod-configmaps-211877c4-210c-4c72-8b21-ac48de934cbf container env-test: <nil>
STEP: delete the pod
May 22 07:15:39.707: INFO: Waiting for pod pod-configmaps-211877c4-210c-4c72-8b21-ac48de934cbf to disappear
May 22 07:15:39.863: INFO: Pod pod-configmaps-211877c4-210c-4c72-8b21-ac48de934cbf no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:15:39.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2413" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":73,"failed":0}

S
------------------------------
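(The ConfigMap test above consumes a ConfigMap key through the container environment. A minimal sketch of that pattern; the ConfigMap name and key are illustrative, not the test's generated values.)

package sketch

import corev1 "k8s.io/api/core/v1"

// configMapEnv maps one key of a ConfigMap onto one environment variable.
var configMapEnv = corev1.EnvVar{
    Name: "CONFIG_DATA",
    ValueFrom: &corev1.EnvVarSource{
        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
            Key:                  "data",
        },
    },
}
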
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:15:32.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
May 22 07:15:33.157: INFO: Waiting up to 5m0s for pod "test-pod-af5e2e3a-a0cb-423a-9251-6c99abe0ad7a" in namespace "svcaccounts-2200" to be "Succeeded or Failed"
May 22 07:15:33.311: INFO: Pod "test-pod-af5e2e3a-a0cb-423a-9251-6c99abe0ad7a": Phase="Pending", Reason="", readiness=false. Elapsed: 154.08348ms
May 22 07:15:35.471: INFO: Pod "test-pod-af5e2e3a-a0cb-423a-9251-6c99abe0ad7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313902481s
May 22 07:15:37.627: INFO: Pod "test-pod-af5e2e3a-a0cb-423a-9251-6c99abe0ad7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469511782s
May 22 07:15:39.782: INFO: Pod "test-pod-af5e2e3a-a0cb-423a-9251-6c99abe0ad7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.624973161s
STEP: Saw pod success
May 22 07:15:39.782: INFO: Pod "test-pod-af5e2e3a-a0cb-423a-9251-6c99abe0ad7a" satisfied condition "Succeeded or Failed"
May 22 07:15:39.936: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod test-pod-af5e2e3a-a0cb-423a-9251-6c99abe0ad7a container agnhost-container: <nil>
STEP: delete the pod
May 22 07:15:40.251: INFO: Waiting for pod test-pod-af5e2e3a-a0cb-423a-9251-6c99abe0ad7a to disappear
May 22 07:15:40.405: INFO: Pod test-pod-af5e2e3a-a0cb-423a-9251-6c99abe0ad7a no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.488 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:40.726: INFO: Driver "nfs" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 193 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    unlimited
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":5,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:15:28.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-ab440d01-d708-459e-902a-445fea10c10f
STEP: Creating a pod to test consume configMaps
May 22 07:15:29.553: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ee49bc0b-2b38-4598-bb3b-9bac5d9c29eb" in namespace "projected-9985" to be "Succeeded or Failed"
May 22 07:15:29.712: INFO: Pod "pod-projected-configmaps-ee49bc0b-2b38-4598-bb3b-9bac5d9c29eb": Phase="Pending", Reason="", readiness=false. Elapsed: 159.032419ms
May 22 07:15:31.869: INFO: Pod "pod-projected-configmaps-ee49bc0b-2b38-4598-bb3b-9bac5d9c29eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31605156s
May 22 07:15:34.026: INFO: Pod "pod-projected-configmaps-ee49bc0b-2b38-4598-bb3b-9bac5d9c29eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.473688951s
May 22 07:15:36.184: INFO: Pod "pod-projected-configmaps-ee49bc0b-2b38-4598-bb3b-9bac5d9c29eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.631734248s
May 22 07:15:38.343: INFO: Pod "pod-projected-configmaps-ee49bc0b-2b38-4598-bb3b-9bac5d9c29eb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.790641325s
May 22 07:15:40.500: INFO: Pod "pod-projected-configmaps-ee49bc0b-2b38-4598-bb3b-9bac5d9c29eb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.947863607s
May 22 07:15:42.657: INFO: Pod "pod-projected-configmaps-ee49bc0b-2b38-4598-bb3b-9bac5d9c29eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.104822672s
STEP: Saw pod success
May 22 07:15:42.658: INFO: Pod "pod-projected-configmaps-ee49bc0b-2b38-4598-bb3b-9bac5d9c29eb" satisfied condition "Succeeded or Failed"
May 22 07:15:42.814: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod pod-projected-configmaps-ee49bc0b-2b38-4598-bb3b-9bac5d9c29eb container agnhost-container: <nil>
STEP: delete the pod
May 22 07:15:43.134: INFO: Waiting for pod pod-projected-configmaps-ee49bc0b-2b38-4598-bb3b-9bac5d9c29eb to disappear
May 22 07:15:43.293: INFO: Pod pod-projected-configmaps-ee49bc0b-2b38-4598-bb3b-9bac5d9c29eb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:15.161 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":91,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:15:33.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
May 22 07:15:34.103: INFO: Waiting up to 5m0s for pod "pod-c4b7ece9-ed87-4181-99d4-8d3b6eb1e2f5" in namespace "emptydir-1055" to be "Succeeded or Failed"
May 22 07:15:34.266: INFO: Pod "pod-c4b7ece9-ed87-4181-99d4-8d3b6eb1e2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 162.809403ms
May 22 07:15:36.429: INFO: Pod "pod-c4b7ece9-ed87-4181-99d4-8d3b6eb1e2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325971068s
May 22 07:15:38.591: INFO: Pod "pod-c4b7ece9-ed87-4181-99d4-8d3b6eb1e2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48804177s
May 22 07:15:40.755: INFO: Pod "pod-c4b7ece9-ed87-4181-99d4-8d3b6eb1e2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.652103398s
May 22 07:15:42.918: INFO: Pod "pod-c4b7ece9-ed87-4181-99d4-8d3b6eb1e2f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.815120841s
STEP: Saw pod success
May 22 07:15:42.918: INFO: Pod "pod-c4b7ece9-ed87-4181-99d4-8d3b6eb1e2f5" satisfied condition "Succeeded or Failed"
May 22 07:15:43.083: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod pod-c4b7ece9-ed87-4181-99d4-8d3b6eb1e2f5 container test-container: <nil>
STEP: delete the pod
May 22 07:15:43.431: INFO: Waiting for pod pod-c4b7ece9-ed87-4181-99d4-8d3b6eb1e2f5 to disappear
May 22 07:15:43.596: INFO: Pod pod-c4b7ece9-ed87-4181-99d4-8d3b6eb1e2f5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.797 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":60,"failed":0}

SS
------------------------------
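(The EmptyDir test above checks the mode of a volume on the node's default medium. A minimal sketch of that volume; an explicit Medium of "Memory" would instead request a tmpfs.)

package sketch

import corev1 "k8s.io/api/core/v1"

// defaultEmptyDir is an emptyDir backed by the node's default storage
// medium (typically disk); StorageMediumDefault is the empty string.
var defaultEmptyDir = corev1.Volume{
    Name: "test-volume",
    VolumeSource: corev1.VolumeSource{
        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
    },
}
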
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":6,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:46.775: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 71 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 22 07:15:44.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd6a267c-46ec-4263-8c57-360b89b958d8" in namespace "downward-api-5824" to be "Succeeded or Failed"
May 22 07:15:44.732: INFO: Pod "downwardapi-volume-cd6a267c-46ec-4263-8c57-360b89b958d8": Phase="Pending", Reason="", readiness=false. Elapsed: 156.523097ms
May 22 07:15:46.890: INFO: Pod "downwardapi-volume-cd6a267c-46ec-4263-8c57-360b89b958d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.31488211s
STEP: Saw pod success
May 22 07:15:46.890: INFO: Pod "downwardapi-volume-cd6a267c-46ec-4263-8c57-360b89b958d8" satisfied condition "Succeeded or Failed"
May 22 07:15:47.052: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod downwardapi-volume-cd6a267c-46ec-4263-8c57-360b89b958d8 container client-container: <nil>
STEP: delete the pod
May 22 07:15:47.371: INFO: Waiting for pod downwardapi-volume-cd6a267c-46ec-4263-8c57-360b89b958d8 to disappear
May 22 07:15:47.528: INFO: Pod downwardapi-volume-cd6a267c-46ec-4263-8c57-360b89b958d8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:15:47.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5824" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":92,"failed":0}

S
------------------------------
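(The downward-API volume test above verifies that, when a container sets no CPU limit, the projected file reports node allocatable CPU. A minimal sketch of the file definition it exercises; the path and container name are illustrative.)

package sketch

import corev1 "k8s.io/api/core/v1"

// cpuLimitFile projects the container's effective CPU limit into a file;
// with no limit set, the kubelet substitutes node allocatable CPU.
var cpuLimitFile = corev1.DownwardAPIVolumeFile{
    Path: "cpu_limit",
    ResourceFieldRef: &corev1.ResourceFieldSelector{
        ContainerName: "client-container",
        Resource:      "limits.cpu",
    },
}
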
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:47.866: INFO: Only supported for providers [azure] (not aws)
... skipping 87 lines ...
• [SLOW TEST:8.362 seconds]
[sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":6,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:50.588: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 121 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
    should update a single-container pod's image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":8,"skipped":26,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:52.879: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 70 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 87 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":66,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Generated clientset
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:15:54.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-6758" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":9,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:15:55.339: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":7,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:01.692: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":8,"skipped":70,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":13,"skipped":125,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:15:07.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
May 22 07:15:33.095: INFO: PersistentVolumeClaim pvc-nmhb2 found and phase=Bound (10.96188346s)
May 22 07:15:33.095: INFO: Waiting up to 3m0s for PersistentVolume nfs-ffrnk to have phase Bound
May 22 07:15:33.253: INFO: PersistentVolume nfs-ffrnk found and phase=Bound (157.402083ms)
STEP: Checking pod has write access to PersistentVolume
May 22 07:15:33.572: INFO: Creating nfs test pod
May 22 07:15:33.730: INFO: Pod should terminate with exitcode 0 (success)
May 22 07:15:33.730: INFO: Waiting up to 5m0s for pod "pvc-tester-fgq2x" in namespace "pv-8511" to be "Succeeded or Failed"
May 22 07:15:33.888: INFO: Pod "pvc-tester-fgq2x": Phase="Pending", Reason="", readiness=false. Elapsed: 157.531983ms
May 22 07:15:36.047: INFO: Pod "pvc-tester-fgq2x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316125631s
May 22 07:15:38.205: INFO: Pod "pvc-tester-fgq2x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474590334s
May 22 07:15:40.367: INFO: Pod "pvc-tester-fgq2x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.636046525s
May 22 07:15:42.525: INFO: Pod "pvc-tester-fgq2x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.794496831s
STEP: Saw pod success
May 22 07:15:42.525: INFO: Pod "pvc-tester-fgq2x" satisfied condition "Succeeded or Failed"
May 22 07:15:42.525: INFO: Pod pvc-tester-fgq2x succeeded 
May 22 07:15:42.525: INFO: Deleting pod "pvc-tester-fgq2x" in namespace "pv-8511"
May 22 07:15:42.705: INFO: Wait up to 5m0s for pod "pvc-tester-fgq2x" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
May 22 07:15:42.863: INFO: Deleting PVC pvc-nmhb2 to trigger reclamation of PV nfs-ffrnk
May 22 07:15:42.863: INFO: Deleting PersistentVolumeClaim "pvc-nmhb2"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":14,"skipped":125,"failed":0}

SSS
------------------------------
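(The PersistentVolumes NFS test above binds a pre-bound PV/PVC pair: the claim names its PV directly. A minimal sketch of such a claim, assuming a client-go release contemporaneous with this run (v0.21.x), where the claim's resources field is corev1.ResourceRequirements; names and sizes are illustrative.)

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preBoundPVC binds to one specific PV via spec.volumeName instead of
// letting the controller match any available volume.
var preBoundPVC = corev1.PersistentVolumeClaim{
    ObjectMeta: metav1.ObjectMeta{Name: "pvc-example"},
    Spec: corev1.PersistentVolumeClaimSpec{
        AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
        VolumeName:  "nfs-example", // pre-bind to this specific PV
        Resources: corev1.ResourceRequirements{
            Requests: corev1.ResourceList{
                corev1.ResourceStorage: resource.MustParse("1Gi"),
            },
        },
    },
}
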
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:04.490: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 106 lines ...
• [SLOW TEST:9.108 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":8,"skipped":51,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:10.844: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:16:12.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-6645" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":8,"skipped":76,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
May 22 07:16:01.370: INFO: PersistentVolumeClaim pvc-m9jxb found but phase is Pending instead of Bound.
May 22 07:16:03.526: INFO: PersistentVolumeClaim pvc-m9jxb found and phase=Bound (6.750554358s)
May 22 07:16:03.527: INFO: Waiting up to 3m0s for PersistentVolume local-w7qc4 to have phase Bound
May 22 07:16:03.684: INFO: PersistentVolume local-w7qc4 found and phase=Bound (157.074453ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mh6w
STEP: Creating a pod to test subpath
May 22 07:16:04.154: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mh6w" in namespace "provisioning-4127" to be "Succeeded or Failed"
May 22 07:16:04.311: INFO: Pod "pod-subpath-test-preprovisionedpv-mh6w": Phase="Pending", Reason="", readiness=false. Elapsed: 157.303771ms
May 22 07:16:06.468: INFO: Pod "pod-subpath-test-preprovisionedpv-mh6w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314104536s
May 22 07:16:08.626: INFO: Pod "pod-subpath-test-preprovisionedpv-mh6w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.472119917s
STEP: Saw pod success
May 22 07:16:08.626: INFO: Pod "pod-subpath-test-preprovisionedpv-mh6w" satisfied condition "Succeeded or Failed"
May 22 07:16:08.787: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-mh6w container test-container-subpath-preprovisionedpv-mh6w: <nil>
STEP: delete the pod
May 22 07:16:09.112: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mh6w to disappear
May 22 07:16:09.268: INFO: Pod pod-subpath-test-preprovisionedpv-mh6w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mh6w
May 22 07:16:09.268: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mh6w" in namespace "provisioning-4127"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:13.561: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 226 lines ...
May 22 07:16:00.063: INFO: PersistentVolumeClaim pvc-z7wwf found but phase is Pending instead of Bound.
May 22 07:16:02.266: INFO: PersistentVolumeClaim pvc-z7wwf found and phase=Bound (11.081661881s)
May 22 07:16:02.266: INFO: Waiting up to 3m0s for PersistentVolume local-br2hw to have phase Bound
May 22 07:16:02.474: INFO: PersistentVolume local-br2hw found and phase=Bound (207.630105ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dddl
STEP: Creating a pod to test subpath
May 22 07:16:02.974: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dddl" in namespace "provisioning-7807" to be "Succeeded or Failed"
May 22 07:16:03.131: INFO: Pod "pod-subpath-test-preprovisionedpv-dddl": Phase="Pending", Reason="", readiness=false. Elapsed: 156.269446ms
May 22 07:16:05.288: INFO: Pod "pod-subpath-test-preprovisionedpv-dddl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313705853s
May 22 07:16:07.446: INFO: Pod "pod-subpath-test-preprovisionedpv-dddl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471691196s
May 22 07:16:09.605: INFO: Pod "pod-subpath-test-preprovisionedpv-dddl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.630498477s
May 22 07:16:11.764: INFO: Pod "pod-subpath-test-preprovisionedpv-dddl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.78992073s
STEP: Saw pod success
May 22 07:16:11.764: INFO: Pod "pod-subpath-test-preprovisionedpv-dddl" satisfied condition "Succeeded or Failed"
May 22 07:16:11.920: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-dddl container test-container-subpath-preprovisionedpv-dddl: <nil>
STEP: delete the pod
May 22 07:16:12.255: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dddl to disappear
May 22 07:16:12.411: INFO: Pod pod-subpath-test-preprovisionedpv-dddl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dddl
May 22 07:16:12.411: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dddl" in namespace "provisioning-7807"
... skipping 43 lines ...
May 22 07:16:00.967: INFO: PersistentVolumeClaim pvc-zk7rv found but phase is Pending instead of Bound.
May 22 07:16:03.127: INFO: PersistentVolumeClaim pvc-zk7rv found and phase=Bound (2.319475446s)
May 22 07:16:03.127: INFO: Waiting up to 3m0s for PersistentVolume local-cjz7w to have phase Bound
May 22 07:16:03.287: INFO: PersistentVolume local-cjz7w found and phase=Bound (159.430751ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wpvg
STEP: Creating a pod to test subpath
May 22 07:16:03.797: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wpvg" in namespace "provisioning-632" to be "Succeeded or Failed"
May 22 07:16:03.956: INFO: Pod "pod-subpath-test-preprovisionedpv-wpvg": Phase="Pending", Reason="", readiness=false. Elapsed: 159.32357ms
May 22 07:16:06.116: INFO: Pod "pod-subpath-test-preprovisionedpv-wpvg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319428723s
May 22 07:16:08.276: INFO: Pod "pod-subpath-test-preprovisionedpv-wpvg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479058163s
May 22 07:16:10.436: INFO: Pod "pod-subpath-test-preprovisionedpv-wpvg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.639392543s
May 22 07:16:12.597: INFO: Pod "pod-subpath-test-preprovisionedpv-wpvg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.799948904s
STEP: Saw pod success
May 22 07:16:12.597: INFO: Pod "pod-subpath-test-preprovisionedpv-wpvg" satisfied condition "Succeeded or Failed"
May 22 07:16:12.756: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-wpvg container test-container-volume-preprovisionedpv-wpvg: <nil>
STEP: delete the pod
May 22 07:16:13.089: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wpvg to disappear
May 22 07:16:13.249: INFO: Pod pod-subpath-test-preprovisionedpv-wpvg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wpvg
May 22 07:16:13.249: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wpvg" in namespace "provisioning-632"
... skipping 72 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451

    Requires at least 2 nodes (not 0)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":10,"skipped":40,"failed":0}
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:16:15.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 160 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":8,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:16.877: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 261 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:17.058: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 37 lines ...
• [SLOW TEST:6.276 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":60,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:17.181: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 128 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:16:18.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5066" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":11,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:18.548: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 101 lines ...
• [SLOW TEST:64.775 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":6,"skipped":57,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:19.011: INFO: Only supported for providers [vsphere] (not aws)
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:16:19.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3649" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":11,"skipped":98,"failed":0}

SS
------------------------------
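(The ServiceAccounts test above verifies opting out of API token automount. A minimal sketch of the pod-spec field involved; the container name and image are illustrative.)

package sketch

import corev1 "k8s.io/api/core/v1"

var noAutomount = false

// optOutPodSpec: with AutomountServiceAccountToken set to false, the
// kubelet mounts no service account token volume into the pod.
var optOutPodSpec = corev1.PodSpec{
    AutomountServiceAccountToken: &noAutomount,
    Containers: []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.5"}},
}
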
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 22 07:16:15.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d595ce8f-2add-4e8a-b4f3-942f820be78a" in namespace "projected-5518" to be "Succeeded or Failed"
May 22 07:16:15.571: INFO: Pod "downwardapi-volume-d595ce8f-2add-4e8a-b4f3-942f820be78a": Phase="Pending", Reason="", readiness=false. Elapsed: 157.202508ms
May 22 07:16:17.726: INFO: Pod "downwardapi-volume-d595ce8f-2add-4e8a-b4f3-942f820be78a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312074778s
May 22 07:16:19.885: INFO: Pod "downwardapi-volume-d595ce8f-2add-4e8a-b4f3-942f820be78a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.471464804s
STEP: Saw pod success
May 22 07:16:19.885: INFO: Pod "downwardapi-volume-d595ce8f-2add-4e8a-b4f3-942f820be78a" satisfied condition "Succeeded or Failed"
May 22 07:16:20.039: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod downwardapi-volume-d595ce8f-2add-4e8a-b4f3-942f820be78a container client-container: <nil>
STEP: delete the pod
May 22 07:16:20.355: INFO: Waiting for pod downwardapi-volume-d595ce8f-2add-4e8a-b4f3-942f820be78a to disappear
May 22 07:16:20.511: INFO: Pod downwardapi-volume-d595ce8f-2add-4e8a-b4f3-942f820be78a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.344 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":51,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":74,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:16:14.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460
    should add annotations for pods in rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":9,"skipped":74,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:21.528: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 77 lines ...
• [SLOW TEST:11.375 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":8,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:24.974: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 46 lines ...
May 22 07:16:21.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 22 07:16:22.542: INFO: Waiting up to 5m0s for pod "pod-72c8666b-ff59-46f3-b50c-470087bb8a3c" in namespace "emptydir-7925" to be "Succeeded or Failed"
May 22 07:16:22.701: INFO: Pod "pod-72c8666b-ff59-46f3-b50c-470087bb8a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 158.481911ms
May 22 07:16:24.858: INFO: Pod "pod-72c8666b-ff59-46f3-b50c-470087bb8a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315910155s
May 22 07:16:27.018: INFO: Pod "pod-72c8666b-ff59-46f3-b50c-470087bb8a3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.475898644s
STEP: Saw pod success
May 22 07:16:27.018: INFO: Pod "pod-72c8666b-ff59-46f3-b50c-470087bb8a3c" satisfied condition "Succeeded or Failed"
May 22 07:16:27.175: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod pod-72c8666b-ff59-46f3-b50c-470087bb8a3c container test-container: <nil>
STEP: delete the pod
May 22 07:16:27.508: INFO: Waiting for pod pod-72c8666b-ff59-46f3-b50c-470087bb8a3c to disappear
May 22 07:16:27.666: INFO: Pod pod-72c8666b-ff59-46f3-b50c-470087bb8a3c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.428 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":80,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 22 07:16:21.052: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7126ba10-1c5b-4f0a-8438-88a1d35ebad0" in namespace "projected-984" to be "Succeeded or Failed"
May 22 07:16:21.309: INFO: Pod "downwardapi-volume-7126ba10-1c5b-4f0a-8438-88a1d35ebad0": Phase="Pending", Reason="", readiness=false. Elapsed: 256.726178ms
May 22 07:16:23.466: INFO: Pod "downwardapi-volume-7126ba10-1c5b-4f0a-8438-88a1d35ebad0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.414079351s
May 22 07:16:25.624: INFO: Pod "downwardapi-volume-7126ba10-1c5b-4f0a-8438-88a1d35ebad0": Phase="Running", Reason="", readiness=true. Elapsed: 4.571517091s
May 22 07:16:27.784: INFO: Pod "downwardapi-volume-7126ba10-1c5b-4f0a-8438-88a1d35ebad0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.731396s
STEP: Saw pod success
May 22 07:16:27.784: INFO: Pod "downwardapi-volume-7126ba10-1c5b-4f0a-8438-88a1d35ebad0" satisfied condition "Succeeded or Failed"
May 22 07:16:27.940: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod downwardapi-volume-7126ba10-1c5b-4f0a-8438-88a1d35ebad0 container client-container: <nil>
STEP: delete the pod
May 22 07:16:28.262: INFO: Waiting for pod downwardapi-volume-7126ba10-1c5b-4f0a-8438-88a1d35ebad0 to disappear
May 22 07:16:28.419: INFO: Pod downwardapi-volume-7126ba10-1c5b-4f0a-8438-88a1d35ebad0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.643 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":100,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:28.755: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 90 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":10,"skipped":50,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:28.088 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":9,"skipped":75,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
May 22 07:14:49.203: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3793
May 22 07:14:49.364: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3793
May 22 07:14:49.526: INFO: creating *v1.StatefulSet: csi-mock-volumes-3793-8037/csi-mockplugin
May 22 07:14:49.690: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3793
May 22 07:14:49.851: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3793"
May 22 07:14:50.011: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3793 to register on node ip-172-20-49-129.ap-northeast-2.compute.internal
I0522 07:14:57.897571    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0522 07:14:58.061202    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3793","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0522 07:14:58.228112    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0522 07:14:58.406385    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0522 07:14:58.721931    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3793","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0522 07:14:59.683017    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-3793"},"Error":"","FullError":null}
STEP: Creating pod
May 22 07:15:00.473: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 22 07:15:00.638: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-tfbjx] to have phase Bound
I0522 07:15:00.647324    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
May 22 07:15:00.799: INFO: PersistentVolumeClaim pvc-tfbjx found but phase is Pending instead of Bound.
I0522 07:15:00.812201    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c"}}},"Error":"","FullError":null}
May 22 07:15:02.963: INFO: PersistentVolumeClaim pvc-tfbjx found and phase=Bound (2.325053895s)
I0522 07:15:03.667003    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
May 22 07:15:03.827: INFO: >>> kubeConfig: /root/.kube/config
I0522 07:15:04.963633    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c","storage.kubernetes.io/csiProvisionerIdentity":"1621667698495-8081-csi-mock-csi-mock-volumes-3793"}},"Response":{},"Error":"","FullError":null}
I0522 07:15:05.127100    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
May 22 07:15:05.285: INFO: >>> kubeConfig: /root/.kube/config
May 22 07:15:06.291: INFO: >>> kubeConfig: /root/.kube/config
May 22 07:15:07.300: INFO: >>> kubeConfig: /root/.kube/config
I0522 07:15:08.336494    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c/globalmount","target_path":"/var/lib/kubelet/pods/3389d300-eba2-4788-ba1c-d7e0569490a3/volumes/kubernetes.io~csi/pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c","storage.kubernetes.io/csiProvisionerIdentity":"1621667698495-8081-csi-mock-csi-mock-volumes-3793"}},"Response":{},"Error":"","FullError":null}
May 22 07:15:11.769: INFO: Deleting pod "pvc-volume-tester-j879d" in namespace "csi-mock-volumes-3793"
May 22 07:15:11.930: INFO: Wait up to 5m0s for pod "pvc-volume-tester-j879d" to be fully deleted
May 22 07:15:15.748: INFO: >>> kubeConfig: /root/.kube/config
I0522 07:15:16.805457    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/3389d300-eba2-4788-ba1c-d7e0569490a3/volumes/kubernetes.io~csi/pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c/mount"},"Response":{},"Error":"","FullError":null}
I0522 07:15:17.053780    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0522 07:15:17.216046    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c/globalmount"},"Response":{},"Error":"","FullError":null}
I0522 07:15:24.447469    4891 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
May 22 07:15:25.424: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tfbjx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3793", SelfLink:"", UID:"0e3f0e2a-ba23-4115-944b-17df3f25980c", ResourceVersion:"8926", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757264500, loc:(*time.Location)(0x9dc0820)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0021c4a50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021c4a68)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0021a8fe0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0021a8ff0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
May 22 07:15:25.424: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tfbjx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3793", SelfLink:"", UID:"0e3f0e2a-ba23-4115-944b-17df3f25980c", ResourceVersion:"8927", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757264500, loc:(*time.Location)(0x9dc0820)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3793"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0023901c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0023901e0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0023901f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002390210)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001acc7c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001acc7d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
May 22 07:15:25.424: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tfbjx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3793", SelfLink:"", UID:"0e3f0e2a-ba23-4115-944b-17df3f25980c", ResourceVersion:"8951", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757264500, loc:(*time.Location)(0x9dc0820)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3793"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002390de0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002390df8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002390e10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002390e28)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c", StorageClassName:(*string)(0xc0032848e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0032848f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
May 22 07:15:25.424: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tfbjx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3793", SelfLink:"", UID:"0e3f0e2a-ba23-4115-944b-17df3f25980c", ResourceVersion:"8952", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757264500, loc:(*time.Location)(0x9dc0820)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3793"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002390e58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002390e70)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002390e88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002390ea0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c", StorageClassName:(*string)(0xc003284920), VolumeMode:(*v1.PersistentVolumeMode)(0xc003284930), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
May 22 07:15:25.424: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tfbjx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3793", SelfLink:"", UID:"0e3f0e2a-ba23-4115-944b-17df3f25980c", ResourceVersion:"10082", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757264500, loc:(*time.Location)(0x9dc0820)}}, DeletionTimestamp:(*v1.Time)(0xc002390ed0), DeletionGracePeriodSeconds:(*int64)(0xc00329f868), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3793"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002390ee8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002390f00)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002390f18), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002390f30)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-0e3f0e2a-ba23-4115-944b-17df3f25980c", StorageClassName:(*string)(0xc003284970), VolumeMode:(*v1.PersistentVolumeMode)(0xc003284980), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, immediate binding
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":7,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":12,"skipped":62,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:33.479: INFO: Only supported for providers [gce gke] (not aws)
... skipping 146 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":13,"skipped":104,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 123 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":6,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:37.848: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 100 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":7,"skipped":12,"failed":0}
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:16:38.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:16:39.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-6631" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":8,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:39.993: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 63 lines ...
• [SLOW TEST:6.833 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":111,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:42.558: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 14 lines ...
May 22 07:16:37.407: INFO: Creating a PV followed by a PVC
May 22 07:16:37.722: INFO: Waiting for PV local-pvqxsg6 to bind to PVC pvc-ns47h
May 22 07:16:37.722: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ns47h] to have phase Bound
May 22 07:16:37.885: INFO: PersistentVolumeClaim pvc-ns47h found and phase=Bound (162.864918ms)
May 22 07:16:37.885: INFO: Waiting up to 3m0s for PersistentVolume local-pvqxsg6 to have phase Bound
May 22 07:16:38.044: INFO: PersistentVolume local-pvqxsg6 found and phase=Bound (158.953155ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
STEP: Initializing test volumes
May 22 07:16:38.361: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9b160bb3-dbe5-4042-92ee-e84b4f2f4fab] Namespace:persistent-local-volumes-test-7740 PodName:hostexec-ip-172-20-48-92.ap-northeast-2.compute.internal-lsmm8 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 22 07:16:38.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:13.934 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":13,"skipped":106,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:42.732: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":8,"skipped":42,"failed":0}
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:16:36.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 47 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":9,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
May 22 07:16:21.012: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:21.177: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:21.721: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:21.886: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:22.051: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:22.219: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:22.565: INFO: Lookups using dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4772.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4772.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local jessie_udp@dns-test-service-2.dns-4772.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4772.svc.cluster.local]

May 22 07:16:27.744: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:27.907: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:28.067: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:28.228: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:28.711: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:28.871: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:29.032: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:29.193: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:29.519: INFO: Lookups using dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4772.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4772.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local jessie_udp@dns-test-service-2.dns-4772.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4772.svc.cluster.local]

May 22 07:16:32.729: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:32.889: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:33.050: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:33.212: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:33.701: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:33.862: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:34.023: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:34.184: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:34.507: INFO: Lookups using dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4772.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4772.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local jessie_udp@dns-test-service-2.dns-4772.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4772.svc.cluster.local]

May 22 07:16:37.729: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:37.890: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:38.052: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:38.213: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:38.703: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:38.864: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:39.024: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:39.186: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:39.508: INFO: Lookups using dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4772.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4772.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local jessie_udp@dns-test-service-2.dns-4772.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4772.svc.cluster.local]

May 22 07:16:42.728: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:42.889: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:43.050: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:43.211: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:43.702: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:43.863: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:44.024: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:44.193: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4772.svc.cluster.local from pod dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17: the server could not find the requested resource (get pods dns-test-65d76751-38e7-4421-ab64-00225e5dbf17)
May 22 07:16:44.514: INFO: Lookups using dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4772.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4772.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4772.svc.cluster.local jessie_udp@dns-test-service-2.dns-4772.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4772.svc.cluster.local]

May 22 07:16:49.519: INFO: DNS probes using dns-4772/dns-test-65d76751-38e7-4421-ab64-00225e5dbf17 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:37.278 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":9,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:50.203: INFO: Only supported for providers [gce gke] (not aws)
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":13,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:55.324: INFO: Only supported for providers [vsphere] (not aws)
... skipping 39 lines ...
• [SLOW TEST:13.224 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":14,"skipped":117,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:56.043: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:57.488: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 284 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":6,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:16:58.569: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 18 lines ...
May 22 07:16:19.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token: 
May 22 07:16:19.986: INFO: Waiting up to 5m0s for pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e" in namespace "svcaccounts-1226" to be "Succeeded or Failed"
May 22 07:16:20.147: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 160.438733ms
May 22 07:16:22.312: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325555589s
May 22 07:16:24.477: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.491016164s
May 22 07:16:26.643: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.656225653s
May 22 07:16:28.804: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.817656949s
May 22 07:16:30.966: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.979792331s
STEP: Saw pod success
May 22 07:16:30.966: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e" satisfied condition "Succeeded or Failed"
May 22 07:16:31.127: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e container agnhost-container: <nil>
STEP: delete the pod
May 22 07:16:31.455: INFO: Waiting for pod test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e to disappear
May 22 07:16:31.615: INFO: Pod test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e no longer exists
STEP: Creating a pod to test service account token: 
May 22 07:16:31.776: INFO: Waiting up to 5m0s for pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e" in namespace "svcaccounts-1226" to be "Succeeded or Failed"
May 22 07:16:31.937: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 160.624688ms
May 22 07:16:34.100: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323892876s
May 22 07:16:36.262: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485969209s
May 22 07:16:38.424: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.648089471s
STEP: Saw pod success
May 22 07:16:38.424: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e" satisfied condition "Succeeded or Failed"
May 22 07:16:38.585: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e container agnhost-container: <nil>
STEP: delete the pod
May 22 07:16:38.911: INFO: Waiting for pod test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e to disappear
May 22 07:16:39.071: INFO: Pod test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e no longer exists
STEP: Creating a pod to test service account token: 
May 22 07:16:39.233: INFO: Waiting up to 5m0s for pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e" in namespace "svcaccounts-1226" to be "Succeeded or Failed"
May 22 07:16:39.393: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 160.507456ms
May 22 07:16:41.556: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323486843s
May 22 07:16:43.721: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48784454s
May 22 07:16:45.883: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649772298s
May 22 07:16:48.044: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.811276944s
STEP: Saw pod success
May 22 07:16:48.044: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e" satisfied condition "Succeeded or Failed"
May 22 07:16:48.209: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e container agnhost-container: <nil>
STEP: delete the pod
May 22 07:16:48.546: INFO: Waiting for pod test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e to disappear
May 22 07:16:48.706: INFO: Pod test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e no longer exists
STEP: Creating a pod to test service account token: 
May 22 07:16:48.868: INFO: Waiting up to 5m0s for pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e" in namespace "svcaccounts-1226" to be "Succeeded or Failed"
May 22 07:16:49.028: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 160.506497ms
May 22 07:16:51.190: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321818465s
May 22 07:16:53.350: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482495825s
May 22 07:16:55.516: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.647917579s
May 22 07:16:57.685: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.817036471s
STEP: Saw pod success
May 22 07:16:57.685: INFO: Pod "test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e" satisfied condition "Succeeded or Failed"
May 22 07:16:57.846: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e container agnhost-container: <nil>
STEP: delete the pod
May 22 07:16:58.174: INFO: Waiting for pod test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e to disappear
May 22 07:16:58.334: INFO: Pod test-pod-217b1eec-62d3-44d7-bf93-2a15970f607e no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:39.651 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 74 lines ...
May 22 07:16:30.243: INFO: PersistentVolumeClaim pvc-nwqrs found but phase is Pending instead of Bound.
May 22 07:16:32.398: INFO: PersistentVolumeClaim pvc-nwqrs found and phase=Bound (4.464581578s)
May 22 07:16:32.398: INFO: Waiting up to 3m0s for PersistentVolume local-kk8km to have phase Bound
May 22 07:16:32.552: INFO: PersistentVolume local-kk8km found and phase=Bound (154.270287ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-t4c2
STEP: Creating a pod to test atomic-volume-subpath
May 22 07:16:33.017: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-t4c2" in namespace "provisioning-9885" to be "Succeeded or Failed"
May 22 07:16:33.171: INFO: Pod "pod-subpath-test-preprovisionedpv-t4c2": Phase="Pending", Reason="", readiness=false. Elapsed: 154.193178ms
May 22 07:16:35.329: INFO: Pod "pod-subpath-test-preprovisionedpv-t4c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312019557s
May 22 07:16:37.484: INFO: Pod "pod-subpath-test-preprovisionedpv-t4c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.467560445s
May 22 07:16:39.638: INFO: Pod "pod-subpath-test-preprovisionedpv-t4c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.621820707s
May 22 07:16:41.793: INFO: Pod "pod-subpath-test-preprovisionedpv-t4c2": Phase="Running", Reason="", readiness=true. Elapsed: 8.776764835s
May 22 07:16:43.948: INFO: Pod "pod-subpath-test-preprovisionedpv-t4c2": Phase="Running", Reason="", readiness=true. Elapsed: 10.931617412s
... skipping 2 lines ...
May 22 07:16:50.419: INFO: Pod "pod-subpath-test-preprovisionedpv-t4c2": Phase="Running", Reason="", readiness=true. Elapsed: 17.402387863s
May 22 07:16:52.574: INFO: Pod "pod-subpath-test-preprovisionedpv-t4c2": Phase="Running", Reason="", readiness=true. Elapsed: 19.557820984s
May 22 07:16:54.732: INFO: Pod "pod-subpath-test-preprovisionedpv-t4c2": Phase="Running", Reason="", readiness=true. Elapsed: 21.715052458s
May 22 07:16:56.890: INFO: Pod "pod-subpath-test-preprovisionedpv-t4c2": Phase="Running", Reason="", readiness=true. Elapsed: 23.872944329s
May 22 07:16:59.044: INFO: Pod "pod-subpath-test-preprovisionedpv-t4c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.027503026s
STEP: Saw pod success
May 22 07:16:59.044: INFO: Pod "pod-subpath-test-preprovisionedpv-t4c2" satisfied condition "Succeeded or Failed"
May 22 07:16:59.199: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-t4c2 container test-container-subpath-preprovisionedpv-t4c2: <nil>
STEP: delete the pod
May 22 07:16:59.613: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-t4c2 to disappear
May 22 07:16:59.811: INFO: Pod pod-subpath-test-preprovisionedpv-t4c2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-t4c2
May 22 07:16:59.811: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-t4c2" in namespace "provisioning-9885"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:02.261: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 129 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":6,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:02.433: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Driver csi-hostpath doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":15,"skipped":113,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:17:00.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 43 lines ...
• [SLOW TEST:17.130 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":10,"skipped":43,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:122.421 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete successful finished jobs with limit of one successful job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:283
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":6,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
May 22 07:16:45.985: INFO: PersistentVolumeClaim pvc-98gxz found but phase is Pending instead of Bound.
May 22 07:16:48.142: INFO: PersistentVolumeClaim pvc-98gxz found and phase=Bound (6.631638569s)
May 22 07:16:48.142: INFO: Waiting up to 3m0s for PersistentVolume local-gxrwv to have phase Bound
May 22 07:16:48.300: INFO: PersistentVolume local-gxrwv found and phase=Bound (157.46557ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9vrm
STEP: Creating a pod to test subpath
May 22 07:16:48.775: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9vrm" in namespace "provisioning-1971" to be "Succeeded or Failed"
May 22 07:16:48.933: INFO: Pod "pod-subpath-test-preprovisionedpv-9vrm": Phase="Pending", Reason="", readiness=false. Elapsed: 158.050202ms
May 22 07:16:51.092: INFO: Pod "pod-subpath-test-preprovisionedpv-9vrm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317210583s
May 22 07:16:53.250: INFO: Pod "pod-subpath-test-preprovisionedpv-9vrm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475153474s
May 22 07:16:55.409: INFO: Pod "pod-subpath-test-preprovisionedpv-9vrm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.634446831s
STEP: Saw pod success
May 22 07:16:55.410: INFO: Pod "pod-subpath-test-preprovisionedpv-9vrm" satisfied condition "Succeeded or Failed"
May 22 07:16:55.567: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-9vrm container test-container-subpath-preprovisionedpv-9vrm: <nil>
STEP: delete the pod
May 22 07:16:55.889: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9vrm to disappear
May 22 07:16:56.046: INFO: Pod pod-subpath-test-preprovisionedpv-9vrm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9vrm
May 22 07:16:56.046: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9vrm" in namespace "provisioning-1971"
STEP: Creating pod pod-subpath-test-preprovisionedpv-9vrm
STEP: Creating a pod to test subpath
May 22 07:16:56.362: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9vrm" in namespace "provisioning-1971" to be "Succeeded or Failed"
May 22 07:16:56.521: INFO: Pod "pod-subpath-test-preprovisionedpv-9vrm": Phase="Pending", Reason="", readiness=false. Elapsed: 158.893676ms
May 22 07:16:58.686: INFO: Pod "pod-subpath-test-preprovisionedpv-9vrm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323276198s
May 22 07:17:00.844: INFO: Pod "pod-subpath-test-preprovisionedpv-9vrm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481685731s
May 22 07:17:03.015: INFO: Pod "pod-subpath-test-preprovisionedpv-9vrm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.652542804s
STEP: Saw pod success
May 22 07:17:03.015: INFO: Pod "pod-subpath-test-preprovisionedpv-9vrm" satisfied condition "Succeeded or Failed"
May 22 07:17:03.173: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-9vrm container test-container-subpath-preprovisionedpv-9vrm: <nil>
STEP: delete the pod
May 22 07:17:03.516: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9vrm to disappear
May 22 07:17:03.682: INFO: Pod pod-subpath-test-preprovisionedpv-9vrm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9vrm
May 22 07:17:03.682: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9vrm" in namespace "provisioning-1971"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":10,"skipped":77,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:05.930: INFO: Only supported for providers [openstack] (not aws)
... skipping 83 lines ...
• [SLOW TEST:38.803 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":11,"skipped":82,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:06.849: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 57 lines ...
May 22 07:17:07.064: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [1.111 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 61 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351

      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:15:36.102: INFO: >>> kubeConfig: /root/.kube/config
... skipping 126 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":10,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:08.351: INFO: Only supported for providers [openstack] (not aws)
... skipping 72 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
May 22 07:16:59.497: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
May 22 07:16:59.497: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-pqj4
STEP: Creating a pod to test subpath
May 22 07:16:59.774: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-pqj4" in namespace "provisioning-5140" to be "Succeeded or Failed"
May 22 07:16:59.966: INFO: Pod "pod-subpath-test-inlinevolume-pqj4": Phase="Pending", Reason="", readiness=false. Elapsed: 191.550796ms
May 22 07:17:02.182: INFO: Pod "pod-subpath-test-inlinevolume-pqj4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407690102s
May 22 07:17:04.344: INFO: Pod "pod-subpath-test-inlinevolume-pqj4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.569848089s
May 22 07:17:06.505: INFO: Pod "pod-subpath-test-inlinevolume-pqj4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.731366253s
May 22 07:17:08.667: INFO: Pod "pod-subpath-test-inlinevolume-pqj4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.892935783s
STEP: Saw pod success
May 22 07:17:08.667: INFO: Pod "pod-subpath-test-inlinevolume-pqj4" satisfied condition "Succeeded or Failed"
May 22 07:17:08.834: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-pqj4 container test-container-volume-inlinevolume-pqj4: <nil>
STEP: delete the pod
May 22 07:17:09.172: INFO: Waiting for pod pod-subpath-test-inlinevolume-pqj4 to disappear
May 22 07:17:09.333: INFO: Pod pod-subpath-test-inlinevolume-pqj4 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-pqj4
May 22 07:17:09.333: INFO: Deleting pod "pod-subpath-test-inlinevolume-pqj4" in namespace "provisioning-5140"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":61,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":14,"skipped":68,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:11.440: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 5 lines ...
May 22 07:17:08.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
May 22 07:17:09.386: INFO: Waiting up to 5m0s for pod "var-expansion-0662a9d3-225c-4610-8dbb-a5a41cc187cf" in namespace "var-expansion-7750" to be "Succeeded or Failed"
May 22 07:17:09.549: INFO: Pod "var-expansion-0662a9d3-225c-4610-8dbb-a5a41cc187cf": Phase="Pending", Reason="", readiness=false. Elapsed: 162.919ms
May 22 07:17:11.712: INFO: Pod "var-expansion-0662a9d3-225c-4610-8dbb-a5a41cc187cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326199966s
May 22 07:17:13.907: INFO: Pod "var-expansion-0662a9d3-225c-4610-8dbb-a5a41cc187cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.521413562s
STEP: Saw pod success
May 22 07:17:13.907: INFO: Pod "var-expansion-0662a9d3-225c-4610-8dbb-a5a41cc187cf" satisfied condition "Succeeded or Failed"
May 22 07:17:14.114: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod var-expansion-0662a9d3-225c-4610-8dbb-a5a41cc187cf container dapi-container: <nil>
STEP: delete the pod
May 22 07:17:14.495: INFO: Waiting for pod var-expansion-0662a9d3-225c-4610-8dbb-a5a41cc187cf to disappear
May 22 07:17:14.659: INFO: Pod var-expansion-0662a9d3-225c-4610-8dbb-a5a41cc187cf no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.612 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":11,"skipped":73,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:15.036: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 255 lines ...
• [SLOW TEST:19.800 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":7,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:18.387: INFO: Driver "nfs" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 199 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:238
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":6,"skipped":23,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":16,"skipped":113,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:17:02.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":7,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:17:06.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
May 22 07:17:07.778: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 22 07:17:08.095: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-425" in namespace "provisioning-425" to be "Succeeded or Failed"
May 22 07:17:08.251: INFO: Pod "hostpath-symlink-prep-provisioning-425": Phase="Pending", Reason="", readiness=false. Elapsed: 156.137955ms
May 22 07:17:10.407: INFO: Pod "hostpath-symlink-prep-provisioning-425": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.312462708s
STEP: Saw pod success
May 22 07:17:10.407: INFO: Pod "hostpath-symlink-prep-provisioning-425" satisfied condition "Succeeded or Failed"
May 22 07:17:10.407: INFO: Deleting pod "hostpath-symlink-prep-provisioning-425" in namespace "provisioning-425"
May 22 07:17:10.568: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-425" to be fully deleted
May 22 07:17:10.724: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-dj77
STEP: Creating a pod to test subpath
May 22 07:17:10.882: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-dj77" in namespace "provisioning-425" to be "Succeeded or Failed"
May 22 07:17:11.039: INFO: Pod "pod-subpath-test-inlinevolume-dj77": Phase="Pending", Reason="", readiness=false. Elapsed: 157.141737ms
May 22 07:17:13.200: INFO: Pod "pod-subpath-test-inlinevolume-dj77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317637733s
May 22 07:17:15.362: INFO: Pod "pod-subpath-test-inlinevolume-dj77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480222006s
May 22 07:17:17.526: INFO: Pod "pod-subpath-test-inlinevolume-dj77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.643855777s
May 22 07:17:19.683: INFO: Pod "pod-subpath-test-inlinevolume-dj77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.800661054s
STEP: Saw pod success
May 22 07:17:19.683: INFO: Pod "pod-subpath-test-inlinevolume-dj77" satisfied condition "Succeeded or Failed"
May 22 07:17:19.839: INFO: Trying to get logs from node ip-172-20-63-92.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-dj77 container test-container-subpath-inlinevolume-dj77: <nil>
STEP: delete the pod
May 22 07:17:20.160: INFO: Waiting for pod pod-subpath-test-inlinevolume-dj77 to disappear
May 22 07:17:20.315: INFO: Pod pod-subpath-test-inlinevolume-dj77 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-dj77
May 22 07:17:20.315: INFO: Deleting pod "pod-subpath-test-inlinevolume-dj77" in namespace "provisioning-425"
STEP: Deleting pod
May 22 07:17:20.471: INFO: Deleting pod "pod-subpath-test-inlinevolume-dj77" in namespace "provisioning-425"
May 22 07:17:20.786: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-425" in namespace "provisioning-425" to be "Succeeded or Failed"
May 22 07:17:20.942: INFO: Pod "hostpath-symlink-prep-provisioning-425": Phase="Pending", Reason="", readiness=false. Elapsed: 156.072509ms
May 22 07:17:23.099: INFO: Pod "hostpath-symlink-prep-provisioning-425": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31238655s
May 22 07:17:25.257: INFO: Pod "hostpath-symlink-prep-provisioning-425": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.470566915s
STEP: Saw pod success
May 22 07:17:25.257: INFO: Pod "hostpath-symlink-prep-provisioning-425" satisfied condition "Succeeded or Failed"
May 22 07:17:25.257: INFO: Deleting pod "hostpath-symlink-prep-provisioning-425" in namespace "provisioning-425"
May 22 07:17:25.419: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-425" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 22 07:17:25.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-425" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":32,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":10,"skipped":90,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:16:18.912: INFO: >>> kubeConfig: /root/.kube/config
... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":11,"skipped":90,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:27.919: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 43 lines ...
May 22 07:15:44.649: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8262
May 22 07:15:44.811: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8262
May 22 07:15:44.970: INFO: creating *v1.StatefulSet: csi-mock-volumes-8262-8085/csi-mockplugin
May 22 07:15:45.127: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8262
May 22 07:15:45.285: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8262"
May 22 07:15:45.442: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8262 to register on node ip-172-20-49-129.ap-northeast-2.compute.internal
I0522 07:15:51.022333    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0522 07:15:51.192674    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8262","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0522 07:15:51.357261    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null}
I0522 07:15:51.528991    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0522 07:15:51.839118    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8262","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0522 07:15:52.712867    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8262","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null}
STEP: Creating pod
May 22 07:15:56.051: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I0522 07:15:56.627032    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-8899bafc-5d77-41d9-8ad0-5023e72206cd","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0522 07:15:58.689738    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-8899bafc-5d77-41d9-8ad0-5023e72206cd","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-8899bafc-5d77-41d9-8ad0-5023e72206cd"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null}
I0522 07:16:00.087430    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
May 22 07:16:00.269: INFO: >>> kubeConfig: /root/.kube/config
I0522 07:16:01.308371    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8899bafc-5d77-41d9-8ad0-5023e72206cd/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-8899bafc-5d77-41d9-8ad0-5023e72206cd","storage.kubernetes.io/csiProvisionerIdentity":"1621667751609-8081-csi-mock-csi-mock-volumes-8262"}},"Response":{},"Error":"","FullError":null}
I0522 07:16:01.593883    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
May 22 07:16:01.782: INFO: >>> kubeConfig: /root/.kube/config
May 22 07:16:02.903: INFO: >>> kubeConfig: /root/.kube/config
May 22 07:16:03.958: INFO: >>> kubeConfig: /root/.kube/config
I0522 07:16:05.168268    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8899bafc-5d77-41d9-8ad0-5023e72206cd/globalmount","target_path":"/var/lib/kubelet/pods/317fa007-eb29-4a32-a2fa-085ac8c10de3/volumes/kubernetes.io~csi/pvc-8899bafc-5d77-41d9-8ad0-5023e72206cd/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-8899bafc-5d77-41d9-8ad0-5023e72206cd","storage.kubernetes.io/csiProvisionerIdentity":"1621667751609-8081-csi-mock-csi-mock-volumes-8262"}},"Response":{},"Error":"","FullError":null}
May 22 07:16:06.886: INFO: Deleting pod "pvc-volume-tester-djzd7" in namespace "csi-mock-volumes-8262"
May 22 07:16:07.043: INFO: Wait up to 5m0s for pod "pvc-volume-tester-djzd7" to be fully deleted
May 22 07:16:10.377: INFO: >>> kubeConfig: /root/.kube/config
I0522 07:16:11.414518    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/317fa007-eb29-4a32-a2fa-085ac8c10de3/volumes/kubernetes.io~csi/pvc-8899bafc-5d77-41d9-8ad0-5023e72206cd/mount"},"Response":{},"Error":"","FullError":null}
I0522 07:16:11.586241    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0522 07:16:11.751875    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8899bafc-5d77-41d9-8ad0-5023e72206cd/globalmount"},"Response":{},"Error":"","FullError":null}
I0522 07:16:23.556525    4864 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
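
Note: the two CreateVolume entries above show the mock driver's injected failure: the first call returns gRPC code ResourceExhausted with "fake error", and the external-provisioner retries until the second call succeeds with volume_id "4". A sketch of that fail-once behavior, an illustration of the pattern rather than the mock driver's actual code:

    package main

    import (
        "fmt"
        "sync/atomic"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    type flakyCreator struct {
        calls int64
    }

    // createVolume fails exactly once with ResourceExhausted, then succeeds,
    // mirroring the two CreateVolume entries in the log.
    func (f *flakyCreator) createVolume(name string) (string, error) {
        if atomic.AddInt64(&f.calls, 1) == 1 {
            return "", status.Error(codes.ResourceExhausted, "fake error")
        }
        return "4", nil // volume_id "4" as in the log
    }

    func main() {
        f := &flakyCreator{}
        _, err1 := f.createVolume("pvc-8899bafc-5d77-41d9-8ad0-5023e72206cd")
        id, err2 := f.createVolume("pvc-8899bafc-5d77-41d9-8ad0-5023e72206cd")
        fmt.Println(err1)          // rpc error: code = ResourceExhausted desc = fake error
        fmt.Println(id, err2 == nil) // 4 true
    }
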
STEP: Checking PVC events
May 22 07:16:24.525: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-7gjlp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8262", SelfLink:"", UID:"8899bafc-5d77-41d9-8ad0-5023e72206cd", ResourceVersion:"11820", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757264556, loc:(*time.Location)(0x9dc0820)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0008cf1a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0008cf1b8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0030afcb0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0030afcc0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
May 22 07:16:24.525: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-7gjlp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8262", SelfLink:"", UID:"8899bafc-5d77-41d9-8ad0-5023e72206cd", ResourceVersion:"11829", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757264556, loc:(*time.Location)(0x9dc0820)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-49-129.ap-northeast-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0008cfbc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0008cfbd8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0008cfbf0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0008cfc08)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00316a0d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00316a0e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
May 22 07:16:24.526: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-7gjlp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8262", SelfLink:"", UID:"8899bafc-5d77-41d9-8ad0-5023e72206cd", ResourceVersion:"11832", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757264556, loc:(*time.Location)(0x9dc0820)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8262", "volume.kubernetes.io/selected-node":"ip-172-20-49-129.ap-northeast-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00262c150), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00262c168)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00262c180), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00262c198)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00262c1e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00262c1f8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0031cc070), VolumeMode:(*v1.PersistentVolumeMode)(0xc0031cc080), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
May 22 07:16:24.526: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-7gjlp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8262", SelfLink:"", UID:"8899bafc-5d77-41d9-8ad0-5023e72206cd", ResourceVersion:"11851", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757264556, loc:(*time.Location)(0x9dc0820)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8262"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00262c210), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00262c228)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00262c240), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00262c258)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00262c270), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00262c288)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0031cc0b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0031cc0c0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
May 22 07:16:24.526: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-7gjlp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8262", SelfLink:"", UID:"8899bafc-5d77-41d9-8ad0-5023e72206cd", ResourceVersion:"11910", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757264556, loc:(*time.Location)(0x9dc0820)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8262", "volume.kubernetes.io/selected-node":"ip-172-20-49-129.ap-northeast-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00262c2b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00262c2d0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00262c2e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00262c300)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00262c318), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00262c330)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0031cc0f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0031cc100), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
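
The ADDED/MODIFIED dumps above are the raw objects delivered by a watch on a single claim: the scheduler stamps the volume.kubernetes.io/selected-node annotation, the external provisioner then adds volume.beta.kubernetes.io/storage-provisioner, and the phase stays Pending until binding completes. A minimal client-go sketch of the same watch pattern follows; the kubeconfig path is an assumption, and the namespace/claim names are copied from the log purely for illustration (this is not the e2e framework's own helper).

package main

import (
    "context"
    "fmt"
    "os"
    "path/filepath"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumed kubeconfig location; the e2e run itself uses /root/.kube/config.
    kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Restrict the watch to one claim, e.g. pvc-7gjlp in csi-mock-volumes-8262.
    w, err := client.CoreV1().PersistentVolumeClaims("csi-mock-volumes-8262").
        Watch(context.TODO(), metav1.ListOptions{FieldSelector: "metadata.name=pvc-7gjlp"})
    if err != nil {
        panic(err)
    }
    defer w.Stop()

    // Each event carries the full object, exactly as dumped above.
    for ev := range w.ResultChan() {
        pvc, ok := ev.Object.(*corev1.PersistentVolumeClaim)
        if !ok {
            continue
        }
        fmt.Printf("PVC event %s: phase=%s annotations=%v\n",
            ev.Type, pvc.Status.Phase, pvc.Annotations)
    }
}
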
... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, late binding, with topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":15,"skipped":101,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:28.929: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 364 lines ...
• [SLOW TEST:11.643 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":7,"skipped":30,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:33.523: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":17,"skipped":113,"failed":0}
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:17:22.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
May 22 07:17:23.773: INFO: Waiting up to 5m0s for pod "var-expansion-6a85bad5-1726-4e93-99fb-1fc4c106c296" in namespace "var-expansion-4069" to be "Succeeded or Failed"
May 22 07:17:23.933: INFO: Pod "var-expansion-6a85bad5-1726-4e93-99fb-1fc4c106c296": Phase="Pending", Reason="", readiness=false. Elapsed: 159.907902ms
May 22 07:17:26.093: INFO: Pod "var-expansion-6a85bad5-1726-4e93-99fb-1fc4c106c296": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320245825s
May 22 07:17:28.254: INFO: Pod "var-expansion-6a85bad5-1726-4e93-99fb-1fc4c106c296": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480975118s
May 22 07:17:30.414: INFO: Pod "var-expansion-6a85bad5-1726-4e93-99fb-1fc4c106c296": Phase="Pending", Reason="", readiness=false. Elapsed: 6.641349401s
May 22 07:17:32.580: INFO: Pod "var-expansion-6a85bad5-1726-4e93-99fb-1fc4c106c296": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.807013992s
STEP: Saw pod success
May 22 07:17:32.580: INFO: Pod "var-expansion-6a85bad5-1726-4e93-99fb-1fc4c106c296" satisfied condition "Succeeded or Failed"
May 22 07:17:32.744: INFO: Trying to get logs from node ip-172-20-35-65.ap-northeast-2.compute.internal pod var-expansion-6a85bad5-1726-4e93-99fb-1fc4c106c296 container dapi-container: <nil>
STEP: delete the pod
May 22 07:17:33.089: INFO: Waiting for pod var-expansion-6a85bad5-1726-4e93-99fb-1fc4c106c296 to disappear
May 22 07:17:33.249: INFO: Pod var-expansion-6a85bad5-1726-4e93-99fb-1fc4c106c296 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.770 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":113,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:33.586: INFO: Only supported for providers [azure] (not aws)
... skipping 57 lines ...
STEP: Destroying namespace "services-4134" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":8,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:34.831: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":9,"skipped":19,"failed":0}
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:17:17.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 31 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:36.212: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 80 lines ...
May 22 07:16:02.855: INFO: Terminating ReplicationController up-down-1 pods took: 100.702785ms
STEP: verifying service up-down-1 is not up
May 22 07:16:13.523: INFO: Creating new host exec pod
May 22 07:16:13.840: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 22 07:16:15.997: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 22 07:16:18.002: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 22 07:16:18.002: INFO: Running '/tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8940 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.136.44:80 && echo service-down-failed'
May 22 07:16:21.608: INFO: rc: 28
May 22 07:16:21.608: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.69.136.44:80 && echo service-down-failed" in pod services-8940/verify-service-down-host-exec-pod: error running /tmp/kubectl1475549380/kubectl --server=https://api.e2e-6ff5930a1f-cb70c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8940 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.136.44:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.69.136.44:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8940
STEP: verifying service up-down-2 is still up
May 22 07:16:21.775: INFO: Creating new host exec pod
May 22 07:16:22.145: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
... skipping 73 lines ...
• [SLOW TEST:138.767 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":5,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:37.173: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 117 lines ...
• [SLOW TEST:9.528 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":16,"skipped":140,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:39.810: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 76 lines ...
May 22 07:17:19.768: INFO: >>> kubeConfig: /root/.kube/config
May 22 07:17:25.838: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54323
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323
May 22 07:17:25.839: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.20.49.129 http://127.0.0.1:54323/hostname] Namespace:hostport-5398 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 22 07:17:25.839: INFO: >>> kubeConfig: /root/.kube/config
May 22 07:17:31.862: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54323
May 22 07:17:31.862: FAIL: Failed to connect to exposed host ports

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0024dc600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0024dc600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
... skipping 253 lines ...
• Failure [51.289 seconds]
[sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  May 22 07:17:31.862: Failed to connect to exposed host ports

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":81,"failed":1,"failures":["[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
May 22 07:17:29.812: INFO: PersistentVolumeClaim pvc-rjp4h found but phase is Pending instead of Bound.
May 22 07:17:31.972: INFO: PersistentVolumeClaim pvc-rjp4h found and phase=Bound (8.788376383s)
May 22 07:17:31.972: INFO: Waiting up to 3m0s for PersistentVolume local-rbrgp to have phase Bound
May 22 07:17:32.128: INFO: PersistentVolume local-rbrgp found and phase=Bound (156.324687ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xr2j
STEP: Creating a pod to test subpath
May 22 07:17:32.600: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xr2j" in namespace "provisioning-5638" to be "Succeeded or Failed"
May 22 07:17:32.756: INFO: Pod "pod-subpath-test-preprovisionedpv-xr2j": Phase="Pending", Reason="", readiness=false. Elapsed: 156.376185ms
May 22 07:17:34.914: INFO: Pod "pod-subpath-test-preprovisionedpv-xr2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313950233s
May 22 07:17:37.071: INFO: Pod "pod-subpath-test-preprovisionedpv-xr2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470900187s
May 22 07:17:39.229: INFO: Pod "pod-subpath-test-preprovisionedpv-xr2j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.629195769s
STEP: Saw pod success
May 22 07:17:39.229: INFO: Pod "pod-subpath-test-preprovisionedpv-xr2j" satisfied condition "Succeeded or Failed"
May 22 07:17:39.386: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-xr2j container test-container-subpath-preprovisionedpv-xr2j: <nil>
STEP: delete the pod
May 22 07:17:39.707: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xr2j to disappear
May 22 07:17:39.866: INFO: Pod pod-subpath-test-preprovisionedpv-xr2j no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xr2j
May 22 07:17:39.866: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xr2j" in namespace "provisioning-5638"
... skipping 141 lines ...
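
The pre-provisioned subPath test above blocks twice before creating its pod: once for the claim (pvc-rjp4h) to leave Pending, and once for the volume (local-rbrgp) to report Bound. A sketch of both waits with client-go, assuming a poll interval and timeouts that approximate the ones printed in the log:

package e2esketch

import (
    "context"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// WaitForPVCBound polls until the claim reports phase Bound.
func WaitForPVCBound(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
    return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
        pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return pvc.Status.Phase == corev1.ClaimBound, nil
    })
}

// WaitForPVBound does the same for the PersistentVolume object itself.
func WaitForPVBound(c kubernetes.Interface, name string, timeout time.Duration) error {
    return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
        pv, err := c.CoreV1().PersistentVolumes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return pv.Status.Phase == corev1.VolumeBound, nil
    })
}
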
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-2cf09f2d-2bbd-447e-967b-0c2ceac4fd88
STEP: Creating a pod to test consume configMaps
May 22 07:17:34.747: INFO: Waiting up to 5m0s for pod "pod-configmaps-e6ed7f81-de4d-4df3-8932-e51d1321dc64" in namespace "configmap-3109" to be "Succeeded or Failed"
May 22 07:17:34.907: INFO: Pod "pod-configmaps-e6ed7f81-de4d-4df3-8932-e51d1321dc64": Phase="Pending", Reason="", readiness=false. Elapsed: 160.206561ms
May 22 07:17:37.068: INFO: Pod "pod-configmaps-e6ed7f81-de4d-4df3-8932-e51d1321dc64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320915992s
May 22 07:17:39.229: INFO: Pod "pod-configmaps-e6ed7f81-de4d-4df3-8932-e51d1321dc64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482217768s
May 22 07:17:41.389: INFO: Pod "pod-configmaps-e6ed7f81-de4d-4df3-8932-e51d1321dc64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.642607396s
STEP: Saw pod success
May 22 07:17:41.390: INFO: Pod "pod-configmaps-e6ed7f81-de4d-4df3-8932-e51d1321dc64" satisfied condition "Succeeded or Failed"
May 22 07:17:41.550: INFO: Trying to get logs from node ip-172-20-48-92.ap-northeast-2.compute.internal pod pod-configmaps-e6ed7f81-de4d-4df3-8932-e51d1321dc64 container agnhost-container: <nil>
STEP: delete the pod
May 22 07:17:42.223: INFO: Waiting for pod pod-configmaps-e6ed7f81-de4d-4df3-8932-e51d1321dc64 to disappear
May 22 07:17:42.383: INFO: Pod pod-configmaps-e6ed7f81-de4d-4df3-8932-e51d1321dc64 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 126 lines ...
• [SLOW TEST:9.509 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":9,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:17:41.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
May 22 07:17:42.490: INFO: Waiting up to 5m0s for pod "pod-26b6712a-ed69-4aee-b04f-20c6db57f2cb" in namespace "emptydir-3968" to be "Succeeded or Failed"
May 22 07:17:42.652: INFO: Pod "pod-26b6712a-ed69-4aee-b04f-20c6db57f2cb": Phase="Pending", Reason="", readiness=false. Elapsed: 161.799916ms
May 22 07:17:44.832: INFO: Pod "pod-26b6712a-ed69-4aee-b04f-20c6db57f2cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341615675s
May 22 07:17:46.995: INFO: Pod "pod-26b6712a-ed69-4aee-b04f-20c6db57f2cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.504299387s
STEP: Saw pod success
May 22 07:17:46.995: INFO: Pod "pod-26b6712a-ed69-4aee-b04f-20c6db57f2cb" satisfied condition "Succeeded or Failed"
May 22 07:17:47.155: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod pod-26b6712a-ed69-4aee-b04f-20c6db57f2cb container test-container: <nil>
STEP: delete the pod
May 22 07:17:47.733: INFO: Waiting for pod pod-26b6712a-ed69-4aee-b04f-20c6db57f2cb to disappear
May 22 07:17:47.893: INFO: Pod pod-26b6712a-ed69-4aee-b04f-20c6db57f2cb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.693 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":82,"failed":1,"failures":["[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May 22 07:17:48.238: INFO: Only supported for providers [gce gke] (not aws)
... skipping 58 lines ...
      Driver "csi-hostpath" does not support FsGroup - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":121,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:17:42.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 87 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":8,"skipped":36,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 22 07:17:42.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-8f347b24-cd49-4dbf-a8f4-36a46f1a1c08
STEP: Creating a pod to test consume secrets
May 22 07:17:43.146: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-238ce066-bf5c-4182-bbe0-c2d9d7c675c7" in namespace "projected-9250" to be "Succeeded or Failed"
May 22 07:17:43.323: INFO: Pod "pod-projected-secrets-238ce066-bf5c-4182-bbe0-c2d9d7c675c7": Phase="Pending", Reason="", readiness=false. Elapsed: 177.516125ms
May 22 07:17:45.483: INFO: Pod "pod-projected-secrets-238ce066-bf5c-4182-bbe0-c2d9d7c675c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336763085s
May 22 07:17:47.640: INFO: Pod "pod-projected-secrets-238ce066-bf5c-4182-bbe0-c2d9d7c675c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494068861s
May 22 07:17:49.797: INFO: Pod "pod-projected-secrets-238ce066-bf5c-4182-bbe0-c2d9d7c675c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.651256262s
STEP: Saw pod success
May 22 07:17:49.797: INFO: Pod "pod-projected-secrets-238ce066-bf5c-4182-bbe0-c2d9d7c675c7" satisfied condition "Succeeded or Failed"
May 22 07:17:49.954: INFO: Trying to get logs from node ip-172-20-49-129.ap-northeast-2.compute.internal pod pod-projected-secrets-238ce066-bf5c-4182-bbe0-c2d9d7c675c7 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 22 07:17:50.272: INFO: Waiting for pod pod-projected-secrets-238ce066-bf5c-4182-bbe0-c2d9d7c675c7 to disappear
May 22 07:17:50.428: INFO: Pod pod-projected-secrets-238ce066-bf5c-4182-bbe0-c2d9d7c675c7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.714 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":36,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27512 lines ...
\" object=\"deployment-6893/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-dd75g\"\nI0522 07:23:45.100156       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-847dcfb7fb-dd75g\" objectUID=d740e4c8-8370-4de5-9002-56ee5656a2c2 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0522 07:23:45.590313       1 tokens_controller.go:262] error synchronizing serviceaccount projected-1196/default: serviceaccounts \"default\" not found\nI0522 07:23:45.761709       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-05-22 07:22:17 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdcr\",\n  InstanceId: \"i-0b9275cce37678aeb\",\n  State: \"detaching\",\n  VolumeId: \"vol-0eeab69774f3b3613\"\n}\nI0522 07:23:45.761753       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume \"aws-z2fh5\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0eeab69774f3b3613\") on node \"ip-172-20-35-65.ap-northeast-2.compute.internal\" \nI0522 07:23:46.072042       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8183-2424/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0522 07:23:46.222337       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8183-2424/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0522 07:23:46.322318       1 namespace_controller.go:185] Namespace has been deleted security-context-6992\nI0522 07:23:46.487352       1 pv_controller.go:879] volume \"local-pvvnmwq\" entered phase \"Available\"\nI0522 07:23:46.642232       1 pv_controller.go:930] claim \"persistent-local-volumes-test-9979/pvc-779sb\" bound to volume \"local-pvvnmwq\"\nI0522 07:23:46.648531       1 pv_controller.go:879] volume \"local-pvvnmwq\" entered phase \"Bound\"\nI0522 07:23:46.648554       1 pv_controller.go:982] volume \"local-pvvnmwq\" bound to claim \"persistent-local-volumes-test-9979/pvc-779sb\"\nI0522 07:23:46.658529       1 pv_controller.go:823] claim \"persistent-local-volumes-test-9979/pvc-779sb\" entered phase \"Bound\"\nI0522 07:23:46.796969       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6893/webserver-847dcfb7fb\" need=1 deleting=1\nI0522 07:23:46.797000       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6893/webserver-847dcfb7fb\" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5]\nI0522 07:23:46.797520       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-847dcfb7fb\" pod=\"deployment-6893/webserver-847dcfb7fb-l6xs9\"\nI0522 07:23:46.798438       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-847dcfb7fb to 1\"\nI0522 07:23:46.808270       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 
07:23:46.814926       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-847dcfb7fb-l6xs9\" objectUID=30dcb70a-229a-4d5a-845c-5a5eaf07314e kind=\"CiliumEndpoint\" virtual=false\nI0522 07:23:46.815417       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-l6xs9\"\nI0522 07:23:46.820285       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-847dcfb7fb-l6xs9\" objectUID=30dcb70a-229a-4d5a-845c-5a5eaf07314e kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:23:47.062281       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volumemode-3842/pvc-vlgvv\"\nI0522 07:23:47.066737       1 pv_controller.go:640] volume \"aws-6xkch\" is released and reclaim policy \"Retain\" will be executed\nI0522 07:23:47.069340       1 pv_controller.go:879] volume \"aws-6xkch\" entered phase \"Released\"\nI0522 07:23:47.225730       1 pv_controller_base.go:505] deletion of claim \"volumemode-3842/pvc-vlgvv\" was already processed\nI0522 07:23:47.280676       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume \"aws-6xkch\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0590d34d6da66fd40\") on node \"ip-172-20-49-129.ap-northeast-2.compute.internal\" \nE0522 07:23:47.546766       1 tokens_controller.go:262] error synchronizing serviceaccount volume-1653/default: secrets \"default-token-kqxr6\" is forbidden: unable to create new content in namespace volume-1653 because it is being terminated\nI0522 07:23:47.582294       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-99f7796d5\" need=9 creating=2\nI0522 07:23:47.586001       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-99f7796d5 to 9\"\nI0522 07:23:47.587688       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-99f7796d5-wtqnh\"\nI0522 07:23:47.606409       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-99f7796d5-84ftv\"\nI0522 07:23:47.619621       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-99f7796d5\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:23:47.625256       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-99f7796d5 to 8\"\nI0522 07:23:47.641199       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6893/webserver-99f7796d5\" need=8 deleting=1\nI0522 07:23:47.641251       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6893/webserver-99f7796d5\" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5]\nI0522 07:23:47.641364       1 controller_utils.go:602] \"Deleting pod\" 
controller=\"webserver-99f7796d5\" pod=\"deployment-6893/webserver-99f7796d5-wtqnh\"\nI0522 07:23:47.678951       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-99f7796d5-wtqnh\"\nI0522 07:23:47.884445       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [crd-publish-openapi-test-multi-ver.example.com/v3, Resource=e2e-test-crd-publish-openapi-3832-crds], removed: []\nI0522 07:23:47.887983       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-3832-crds.crd-publish-openapi-test-multi-ver.example.com\nI0522 07:23:47.888071       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0522 07:23:47.888251       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (19h34m32.649212209s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0522 07:23:47.974136       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd-publish-openapi-test-multi-ver.example.com/v3, Resource=e2e-test-crd-publish-openapi-3832-crds], removed: []\nI0522 07:23:47.989928       1 shared_informer.go:247] Caches are synced for resource quota \nI0522 07:23:47.989949       1 resource_quota_controller.go:454] synced quota controller\nI0522 07:23:48.032871       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0522 07:23:48.032973       1 shared_informer.go:247] Caches are synced for garbage collector \nI0522 07:23:48.033015       1 garbagecollector.go:254] synced garbage collector\nE0522 07:23:48.331125       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0522 07:23:48.387769       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6893/webserver-847dcfb7fb\" need=0 deleting=1\nI0522 07:23:48.387811       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6893/webserver-847dcfb7fb\" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5]\nI0522 07:23:48.387898       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-847dcfb7fb\" pod=\"deployment-6893/webserver-847dcfb7fb-qqzxh\"\nI0522 07:23:48.389125       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-847dcfb7fb to 0\"\nI0522 07:23:48.408118       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-847dcfb7fb-qqzxh\"\nI0522 07:23:48.408359       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-847dcfb7fb-qqzxh\" objectUID=672d1282-8274-424b-b285-cb85ffa72ede kind=\"CiliumEndpoint\" virtual=false\nI0522 07:23:48.422646       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-847dcfb7fb-qqzxh\" objectUID=672d1282-8274-424b-b285-cb85ffa72ede kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:23:48.428512       1 deployment_controller.go:490] \"Error syncing 
deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0522 07:23:48.566019       1 namespace_controller.go:162] deletion of namespace configmap-6785 failed: unexpected items still remain in namespace: configmap-6785 for gvr: /v1, Resource=pods\nE0522 07:23:48.727510       1 namespace_controller.go:162] deletion of namespace configmap-6785 failed: unexpected items still remain in namespace: configmap-6785 for gvr: /v1, Resource=pods\nE0522 07:23:49.108433       1 pv_controller.go:1452] error finding provisioning plugin for claim provisioning-9690/pvc-sdks2: storageclass.storage.k8s.io \"provisioning-9690\" not found\nI0522 07:23:49.108668       1 event.go:291] \"Event occurred\" object=\"provisioning-9690/pvc-sdks2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-9690\\\" not found\"\nE0522 07:23:49.249035       1 namespace_controller.go:162] deletion of namespace configmap-6785 failed: unexpected items still remain in namespace: configmap-6785 for gvr: /v1, Resource=pods\nI0522 07:23:49.275782       1 pv_controller.go:879] volume \"local-crr7r\" entered phase \"Available\"\nI0522 07:23:49.322088       1 garbagecollector.go:471] \"Processing object\" object=\"services-2490/pause-pod-565dfd4d86\" objectUID=264cb255-bceb-41cf-9311-c9ad49832967 kind=\"ReplicaSet\" virtual=false\nI0522 07:23:49.322131       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"services-2490/pause-pod\"\nI0522 07:23:49.325989       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2490/pause-pod-565dfd4d86\" objectUID=264cb255-bceb-41cf-9311-c9ad49832967 kind=\"ReplicaSet\" propagationPolicy=Background\nI0522 07:23:49.330015       1 garbagecollector.go:471] \"Processing object\" object=\"services-2490/pause-pod-565dfd4d86-tzs5j\" objectUID=6559ca46-7f98-4404-b2a2-2208ecf81b12 kind=\"Pod\" virtual=false\nI0522 07:23:49.330243       1 garbagecollector.go:471] \"Processing object\" object=\"services-2490/pause-pod-565dfd4d86-25zwd\" objectUID=f3dbbb11-efad-458e-9bf0-84cd824539e4 kind=\"Pod\" virtual=false\nI0522 07:23:49.334170       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2490/pause-pod-565dfd4d86-tzs5j\" objectUID=6559ca46-7f98-4404-b2a2-2208ecf81b12 kind=\"Pod\" propagationPolicy=Background\nI0522 07:23:49.334501       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2490/pause-pod-565dfd4d86-25zwd\" objectUID=f3dbbb11-efad-458e-9bf0-84cd824539e4 kind=\"Pod\" propagationPolicy=Background\nI0522 07:23:49.378644       1 garbagecollector.go:471] \"Processing object\" object=\"services-2490/pause-pod-565dfd4d86-tzs5j\" objectUID=e43d76e1-6e6c-4c0e-938b-8cb2f420b1b7 kind=\"CiliumEndpoint\" virtual=false\nI0522 07:23:49.386913       1 garbagecollector.go:471] \"Processing object\" object=\"services-2490/pause-pod-565dfd4d86-25zwd\" objectUID=a71d6b2e-b7e0-4d57-a31b-3b0196b30f4a kind=\"CiliumEndpoint\" virtual=false\nI0522 07:23:49.391626       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2490/pause-pod-565dfd4d86-tzs5j\" objectUID=e43d76e1-6e6c-4c0e-938b-8cb2f420b1b7 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:23:49.395305       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2490/pause-pod-565dfd4d86-25zwd\" 
objectUID=a71d6b2e-b7e0-4d57-a31b-3b0196b30f4a kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:23:49.437092       1 pv_controller.go:879] volume \"hostpath-72787\" entered phase \"Available\"\nI0522 07:23:49.505756       1 garbagecollector.go:471] \"Processing object\" object=\"services-2490/echo-sourceip\" objectUID=b33b45da-b572-4673-8f89-8c8ba0e35686 kind=\"CiliumEndpoint\" virtual=false\nI0522 07:23:49.518096       1 endpoints_controller.go:368] \"Error syncing endpoints, retrying\" service=\"services-2490/sourceip-test\" err=\"Operation cannot be fulfilled on endpoints \\\"sourceip-test\\\": the object has been modified; please apply your changes to the latest version and try again\"\nW0522 07:23:49.518207       1 endpointslice_controller.go:305] Error syncing endpoint slices for service \"services-2490/sourceip-test\", retrying. Error: EndpointSlice informer cache is out of date\nI0522 07:23:49.518282       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2490/echo-sourceip\" objectUID=b33b45da-b572-4673-8f89-8c8ba0e35686 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:23:49.518799       1 event.go:291] \"Event occurred\" object=\"services-2490/sourceip-test\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint services-2490/sourceip-test: Operation cannot be fulfilled on endpoints \\\"sourceip-test\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0522 07:23:49.631889       1 namespace_controller.go:162] deletion of namespace configmap-6785 failed: unexpected items still remain in namespace: configmap-6785 for gvr: /v1, Resource=pods\nI0522 07:23:49.661179       1 garbagecollector.go:471] \"Processing object\" object=\"services-2490/sourceip-test-dbxzx\" objectUID=37219419-7618-450a-8178-6de1f9ac0532 kind=\"EndpointSlice\" virtual=false\nI0522 07:23:49.667132       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2490/sourceip-test-dbxzx\" objectUID=37219419-7618-450a-8178-6de1f9ac0532 kind=\"EndpointSlice\" propagationPolicy=Background\nE0522 07:23:49.827432       1 namespace_controller.go:162] deletion of namespace configmap-6785 failed: unexpected items still remain in namespace: configmap-6785 for gvr: /v1, Resource=pods\nI0522 07:23:50.026041       1 namespace_controller.go:185] Namespace has been deleted events-253\nE0522 07:23:50.061950       1 namespace_controller.go:162] deletion of namespace configmap-6785 failed: unexpected items still remain in namespace: configmap-6785 for gvr: /v1, Resource=pods\nE0522 07:23:50.338396       1 namespace_controller.go:162] deletion of namespace configmap-6785 failed: unexpected items still remain in namespace: configmap-6785 for gvr: /v1, Resource=pods\nI0522 07:23:50.360300       1 event.go:291] \"Event occurred\" object=\"deployment-5794/test-rollover-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rollover-deployment-78bc8b888c to 1\"\nI0522 07:23:50.360629       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-5794/test-rollover-deployment-78bc8b888c\" need=1 creating=1\nI0522 07:23:50.365888       1 event.go:291] \"Event occurred\" object=\"deployment-5794/test-rollover-deployment-78bc8b888c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: 
test-rollover-deployment-78bc8b888c-fplw4\"\nI0522 07:23:50.379664       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5794/test-rollover-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rollover-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:23:50.718767       1 namespace_controller.go:185] Namespace has been deleted projected-1196\nE0522 07:23:50.767189       1 namespace_controller.go:162] deletion of namespace configmap-6785 failed: unexpected items still remain in namespace: configmap-6785 for gvr: /v1, Resource=pods\nI0522 07:23:51.131169       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-2795/aws9phnd\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nE0522 07:23:51.202145       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-4859/default: secrets \"default-token-7fcd8\" is forbidden: unable to create new content in namespace secrets-4859 because it is being terminated\nI0522 07:23:51.457230       1 namespace_controller.go:185] Namespace has been deleted ephemeral-936-9037\nE0522 07:23:51.547615       1 namespace_controller.go:162] deletion of namespace configmap-6785 failed: unexpected items still remain in namespace: configmap-6785 for gvr: /v1, Resource=pods\nE0522 07:23:51.592231       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0522 07:23:51.633541       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-5794/test-rollover-deployment-78bc8b888c\" need=0 deleting=1\nI0522 07:23:51.633749       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-5794/test-rollover-deployment-78bc8b888c\" relatedReplicaSets=[test-rollover-controller test-rollover-deployment-78bc8b888c test-rollover-deployment-98c5f4599]\nI0522 07:23:51.633984       1 controller_utils.go:602] \"Deleting pod\" controller=\"test-rollover-deployment-78bc8b888c\" pod=\"deployment-5794/test-rollover-deployment-78bc8b888c-fplw4\"\nI0522 07:23:51.633824       1 event.go:291] \"Event occurred\" object=\"deployment-5794/test-rollover-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rollover-deployment-78bc8b888c to 0\"\nI0522 07:23:51.642557       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5794/test-rollover-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rollover-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:23:51.648530       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-5794/test-rollover-deployment-98c5f4599\" need=1 creating=1\nI0522 07:23:51.649281       1 event.go:291] \"Event occurred\" object=\"deployment-5794/test-rollover-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rollover-deployment-98c5f4599 to 1\"\nI0522 07:23:51.655076       1 event.go:291] \"Event occurred\" object=\"deployment-5794/test-rollover-deployment-78bc8b888c\" kind=\"ReplicaSet\" 
apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rollover-deployment-78bc8b888c-fplw4\"\nI0522 07:23:51.667807       1 event.go:291] \"Event occurred\" object=\"deployment-5794/test-rollover-deployment-98c5f4599\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rollover-deployment-98c5f4599-z46zf\"\nI0522 07:23:51.684892       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5794/test-rollover-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rollover-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0522 07:23:51.775909       1 tokens_controller.go:262] error synchronizing serviceaccount volume-796/default: secrets \"default-token-2nq8k\" is forbidden: unable to create new content in namespace volume-796 because it is being terminated\nI0522 07:23:51.782138       1 namespace_controller.go:185] Namespace has been deleted volume-5189\nI0522 07:23:52.028526       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8183/pvc-c8ld7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-8183\\\" or manually created by system administrator\"\nI0522 07:23:52.028553       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8183/pvc-c8ld7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-8183\\\" or manually created by system administrator\"\nI0522 07:23:52.040071       1 pv_controller.go:879] volume \"pvc-6bc035de-4485-49a9-b328-3b3113b0e827\" entered phase \"Bound\"\nI0522 07:23:52.040101       1 pv_controller.go:982] volume \"pvc-6bc035de-4485-49a9-b328-3b3113b0e827\" bound to claim \"csi-mock-volumes-8183/pvc-c8ld7\"\nI0522 07:23:52.046284       1 pv_controller.go:823] claim \"csi-mock-volumes-8183/pvc-c8ld7\" entered phase \"Bound\"\nI0522 07:23:52.687980       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-6bc035de-4485-49a9-b328-3b3113b0e827\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8183^4\") from node \"ip-172-20-63-92.ap-northeast-2.compute.internal\" \nE0522 07:23:52.765427       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0522 07:23:52.962852       1 namespace_controller.go:162] deletion of namespace configmap-6785 failed: unexpected items still remain in namespace: configmap-6785 for gvr: /v1, Resource=pods\nI0522 07:23:53.261982       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume \"pvc-6bc035de-4485-49a9-b328-3b3113b0e827\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8183^4\") from node \"ip-172-20-63-92.ap-northeast-2.compute.internal\" \nI0522 07:23:53.262238       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8183/pvc-volume-tester-6fskw\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6bc035de-4485-49a9-b328-3b3113b0e827\\\" \"\nI0522 07:23:53.359585       1 
I0522 07:23:53.692884       1 pvc_protection_controller.go:291] "PVC is unused" PVC="fsgroupchangepolicy-8909/awsml7mt"
I0522 07:23:53.699963       1 pv_controller.go:640] volume "pvc-eaa0a76f-0b33-4eba-8a03-56b0c5491e54" is released and reclaim policy "Delete" will be executed
I0522 07:23:53.702851       1 pv_controller.go:879] volume "pvc-eaa0a76f-0b33-4eba-8a03-56b0c5491e54" entered phase "Released"
I0522 07:23:53.705322       1 pv_controller.go:1341] isVolumeReleased[pvc-eaa0a76f-0b33-4eba-8a03-56b0c5491e54]: volume is released
I0522 07:23:53.869410       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-northeast-2a/vol-041c9c58ddea47a3b: error deleting EBS volume "vol-041c9c58ddea47a3b" since volume is currently attached to "i-0b9275cce37678aeb"
E0522 07:23:53.870315       1 goroutinemap.go:150] Operation for "delete-pvc-eaa0a76f-0b33-4eba-8a03-56b0c5491e54[4487ddbc-5dba-4d4e-b9e2-45b74b841661]" failed. No retries permitted until 2021-05-22 07:23:54.370292627 +0000 UTC m=+1020.036058531 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-041c9c58ddea47a3b\" since volume is currently attached to \"i-0b9275cce37678aeb\""
I0522 07:23:53.870565       1 event.go:291] "Event occurred" object="pvc-eaa0a76f-0b33-4eba-8a03-56b0c5491e54" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-041c9c58ddea47a3b\" since volume is currently attached to \"i-0b9275cce37678aeb\""
I0522 07:23:53.910490       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-eaa0a76f-0b33-4eba-8a03-56b0c5491e54" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-041c9c58ddea47a3b") on node "ip-172-20-35-65.ap-northeast-2.compute.internal" 
I0522 07:23:53.913319       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-eaa0a76f-0b33-4eba-8a03-56b0c5491e54" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-041c9c58ddea47a3b") on node "ip-172-20-35-65.ap-northeast-2.compute.internal" 
I0522 07:23:54.552834       1 event.go:291] "Event occurred" object="csi-mock-volumes-145-5587/csi-mockplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
I0522 07:23:54.789803       1 replica_set.go:559] "Too few replicas" replicaSet="svc-latency-8527/svc-latency-rc" need=1 creating=1
I0522 07:23:54.798122       1 event.go:291] "Event occurred" object="svc-latency-8527/svc-latency-rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: svc-latency-rc-clss9"
I0522 07:23:54.858135       1 replica_set.go:449] ReplicaSet "test-rollover-deployment-98c5f4599" will be enqueued after 10s for availability check
E0522 07:23:55.410328       1 tokens_controller.go:262] error synchronizing serviceaccount pv-protection-8385/default: secrets "default-token-pwkkv" is forbidden: unable to create new content in namespace pv-protection-8385 because it is being terminated
E0522 07:23:55.615052       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 07:23:56.272739       1 namespace_controller.go:185] Namespace has been deleted secrets-4859
I0522 07:23:56.829297       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-northeast-2a/vol-0b4df4055e38a01c1
I0522 07:23:56.859281       1 namespace_controller.go:185] Namespace has been deleted volume-796
I0522 07:23:56.876740       1 pv_controller.go:1677] volume "pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa" provisioned for claim "fsgroupchangepolicy-2795/aws9phnd"
I0522 07:23:56.877023       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-2795/aws9phnd" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa using kubernetes.io/aws-ebs"
I0522 07:23:56.882232       1 pv_controller.go:879] volume "pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa" entered phase "Bound"
I0522 07:23:56.882262       1 pv_controller.go:982] volume "pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa" bound to claim "fsgroupchangepolicy-2795/aws9phnd"
I0522 07:23:56.888633       1 pv_controller.go:823] claim "fsgroupchangepolicy-2795/aws9phnd" entered phase "Bound"
E0522 07:23:57.357717       1 tokens_controller.go:262] error synchronizing serviceaccount tables-8690/default: secrets "default-token-wxgkc" is forbidden: unable to create new content in namespace tables-8690 because it is being terminated
I0522 07:23:57.540852       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0b4df4055e38a01c1") from node "ip-172-20-49-129.ap-northeast-2.compute.internal" 
I0522 07:23:57.582718       1 aws.go:2014] Assigned mount device ct -> volume vol-0b4df4055e38a01c1
I0522 07:23:57.807821       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-99f7796d5 to 7"
I0522 07:23:57.808223       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6893/webserver-99f7796d5" need=7 deleting=1
I0522 07:23:57.808394       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6893/webserver-99f7796d5" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5]
I0522 07:23:57.808546       1 controller_utils.go:602] "Deleting pod" controller="webserver-99f7796d5" pod="deployment-6893/webserver-99f7796d5-84ftv"
I0522 07:23:57.830083       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6893/webserver-58477d78f9" need=2 creating=2
I0522 07:23:57.833885       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-58477d78f9 to 2"
I0522 07:23:57.839289       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-99f7796d5-84ftv" objectUID=e49b5569-37f3-4d27-9695-ddb8ed57c8d0 kind="CiliumEndpoint" virtual=false
I0522 07:23:57.850943       1 event.go:291] "Event occurred" object="deployment-6893/webserver-99f7796d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-99f7796d5-84ftv"
I0522 07:23:57.855778       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6893/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0522 07:23:57.855959       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-99f7796d5-84ftv" objectUID=e49b5569-37f3-4d27-9695-ddb8ed57c8d0 kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:23:57.856175       1 event.go:291] "Event occurred" object="deployment-6893/webserver-58477d78f9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-58477d78f9-n889f"
I0522 07:23:57.866447       1 event.go:291] "Event occurred" object="deployment-6893/webserver-58477d78f9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-58477d78f9-5qjr9"
I0522 07:23:57.903177       1 aws.go:2427] AttachVolume volume="vol-0b4df4055e38a01c1" instance="i-094127254d58b1025" request returned {
  AttachTime: 2021-05-22 07:23:57.889 +0000 UTC,
  Device: "/dev/xvdct",
  InstanceId: "i-094127254d58b1025",
  State: "attaching",
  VolumeId: "vol-0b4df4055e38a01c1"
}
I0522 07:23:57.967993       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6893/webserver-99f7796d5" need=6 deleting=1
I0522 07:23:57.968690       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6893/webserver-99f7796d5" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]
I0522 07:23:57.969047       1 controller_utils.go:602] "Deleting pod" controller="webserver-99f7796d5" pod="deployment-6893/webserver-99f7796d5-sjhvf"
I0522 07:23:57.968657       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-99f7796d5 to 6"
I0522 07:23:58.030697       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-99f7796d5-sjhvf" objectUID=7eaea5a2-e5d8-4a16-aa2f-ee4870718c4b kind="CiliumEndpoint" virtual=false
I0522 07:23:58.034275       1 event.go:291] "Event occurred" object="deployment-6893/webserver-99f7796d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-99f7796d5-sjhvf"
I0522 07:23:58.044462       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6893/webserver-58477d78f9" need=3 creating=1
I0522 07:23:58.050578       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-99f7796d5-sjhvf" objectUID=7eaea5a2-e5d8-4a16-aa2f-ee4870718c4b kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:23:58.055883       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-58477d78f9 to 3"
I0522 07:23:58.089930       1 namespace_controller.go:185] Namespace has been deleted volumemode-3842
I0522 07:23:58.090825       1 event.go:291] "Event occurred" object="deployment-6893/webserver-58477d78f9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-58477d78f9-zhwwg"
I0522 07:23:58.136240       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6893/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:23:58.180939       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-58477d78f9\" need=3 creating=1\nI0522 07:23:58.199078       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-58477d78f9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-58477d78f9-t6zjr\"\nI0522 07:23:58.499901       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-9979/pod-1dd4b10e-685f-4261-a74c-9c3ce396e0e5\" PVC=\"persistent-local-volumes-test-9979/pvc-779sb\"\nI0522 07:23:58.501282       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-9979/pvc-779sb\"\nI0522 07:23:58.613020       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-9979/pod-1dd4b10e-685f-4261-a74c-9c3ce396e0e5\" PVC=\"persistent-local-volumes-test-9979/pvc-779sb\"\nI0522 07:23:58.613043       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-9979/pvc-779sb\"\nI0522 07:23:59.293654       1 aws.go:2291] Waiting for volume \"vol-041c9c58ddea47a3b\" state: actual=detaching, desired=detached\nI0522 07:23:59.877285       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6893/webserver-99f7796d5\" need=5 deleting=1\nI0522 07:23:59.877828       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6893/webserver-99f7796d5\" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]\nI0522 07:23:59.877793       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-99f7796d5 to 5\"\nI0522 07:23:59.878130       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-99f7796d5\" pod=\"deployment-6893/webserver-99f7796d5-srbbl\"\nI0522 07:23:59.891415       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-58477d78f9\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:23:59.897253       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-58477d78f9\" need=4 creating=1\nI0522 07:23:59.900331       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-58477d78f9 to 4\"\nI0522 07:23:59.903569       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-99f7796d5-srbbl\" objectUID=bb33f952-5846-4493-8af1-e29f7a87b049 kind=\"CiliumEndpoint\" virtual=false\nI0522 07:23:59.908161       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-99f7796d5-srbbl\"\nI0522 07:23:59.920674       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-58477d78f9\" 
kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-58477d78f9-sfdhs\"\nI0522 07:23:59.939295       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-99f7796d5-srbbl\" objectUID=bb33f952-5846-4493-8af1-e29f7a87b049 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:23:59.967828       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-99f7796d5 to 4\"\nI0522 07:23:59.975612       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6893/webserver-99f7796d5\" need=4 deleting=1\nI0522 07:23:59.976036       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6893/webserver-99f7796d5\" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]\nI0522 07:23:59.976271       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-99f7796d5\" pod=\"deployment-6893/webserver-99f7796d5-5dsr8\"\nI0522 07:23:59.982394       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-58477d78f9\" need=5 creating=1\nI0522 07:23:59.982674       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-58477d78f9 to 5\"\nI0522 07:23:59.989394       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-58477d78f9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-58477d78f9-t7q78\"\nI0522 07:23:59.996358       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-99f7796d5-5dsr8\" objectUID=e49416ae-ca02-4883-99cd-f578b7298631 kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:00.007747       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-99f7796d5-5dsr8\"\nI0522 07:24:00.016963       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-99f7796d5-5dsr8\" objectUID=e49416ae-ca02-4883-99cd-f578b7298631 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:00.017201       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:24:00.031677       1 aws.go:2037] Releasing in-process attachment entry: ct -> volume vol-0b4df4055e38a01c1\nI0522 07:24:00.031719       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume \"pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0b4df4055e38a01c1\") from node \"ip-172-20-49-129.ap-northeast-2.compute.internal\" \nI0522 07:24:00.031768       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-2795/pod-8c766708-0ea4-4088-8fdd-97041657992e\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa\\\" \"\nI0522 07:24:00.112903       1 event.go:291] 
\"Event occurred\" object=\"cronjob-9483/concurrent\" kind=\"CronJob\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job concurrent-27027804\"\nI0522 07:24:00.122059       1 cronjob_controllerv2.go:193] \"error cleaning up jobs\" cronjob=\"cronjob-9483/concurrent\" resourceVersion=\"31085\" err=\"Operation cannot be fulfilled on cronjobs.batch \\\"concurrent\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0522 07:24:00.122078       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-9483/concurrent, requeuing: Operation cannot be fulfilled on cronjobs.batch \"concurrent\": the object has been modified; please apply your changes to the latest version and try again\nI0522 07:24:00.125313       1 event.go:291] \"Event occurred\" object=\"cronjob-9483/concurrent-27027804\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: concurrent-27027804-ck75x\"\nE0522 07:24:00.227564       1 tokens_controller.go:262] error synchronizing serviceaccount svcaccounts-6087/default: secrets \"default-token-f6drx\" is forbidden: unable to create new content in namespace svcaccounts-6087 because it is being terminated\nI0522 07:24:00.293340       1 namespace_controller.go:185] Namespace has been deleted services-2490\nI0522 07:24:00.582608       1 namespace_controller.go:185] Namespace has been deleted pv-protection-8385\nI0522 07:24:00.723590       1 namespace_controller.go:185] Namespace has been deleted configmap-6785\nI0522 07:24:00.768733       1 namespace_controller.go:185] Namespace has been deleted pods-4740\nI0522 07:24:00.813051       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-9979/pod-1dd4b10e-685f-4261-a74c-9c3ce396e0e5\" PVC=\"persistent-local-volumes-test-9979/pvc-779sb\"\nI0522 07:24:00.813077       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-9979/pvc-779sb\"\nI0522 07:24:00.898343       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6893/webserver-99f7796d5\" need=3 deleting=1\nI0522 07:24:00.898541       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6893/webserver-99f7796d5\" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]\nI0522 07:24:00.899050       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-99f7796d5\" pod=\"deployment-6893/webserver-99f7796d5-dm85k\"\nI0522 07:24:00.898919       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-99f7796d5 to 3\"\nI0522 07:24:00.912315       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-99f7796d5-dm85k\" objectUID=4dbc1050-d767-432f-a07f-a26c7f77331d kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:00.917009       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-58477d78f9\" need=6 creating=1\nI0522 07:24:00.917415       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-99f7796d5-dm85k\" objectUID=4dbc1050-d767-432f-a07f-a26c7f77331d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:00.918051       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" 
apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-99f7796d5-dm85k\"\nI0522 07:24:00.918071       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-58477d78f9 to 6\"\nI0522 07:24:00.929818       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-58477d78f9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-58477d78f9-bsknm\"\nI0522 07:24:00.940388       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:24:01.023324       1 garbagecollector.go:471] \"Processing object\" object=\"pods-4790/fooservice-7wlq7\" objectUID=4de4aae7-5047-4d0e-a1ff-8778b5c3b655 kind=\"EndpointSlice\" virtual=false\nI0522 07:24:01.025050       1 garbagecollector.go:580] \"Deleting object\" object=\"pods-4790/fooservice-7wlq7\" objectUID=4de4aae7-5047-4d0e-a1ff-8778b5c3b655 kind=\"EndpointSlice\" propagationPolicy=Background\nI0522 07:24:01.135147       1 pv_controller.go:930] claim \"provisioning-9690/pvc-sdks2\" bound to volume \"local-crr7r\"\nI0522 07:24:01.145503       1 pv_controller.go:1341] isVolumeReleased[pvc-eaa0a76f-0b33-4eba-8a03-56b0c5491e54]: volume is released\nI0522 07:24:01.161335       1 pv_controller.go:879] volume \"local-crr7r\" entered phase \"Bound\"\nI0522 07:24:01.161769       1 pv_controller.go:982] volume \"local-crr7r\" bound to claim \"provisioning-9690/pvc-sdks2\"\nI0522 07:24:01.177750       1 pv_controller.go:823] claim \"provisioning-9690/pvc-sdks2\" entered phase \"Bound\"\nE0522 07:24:01.178367       1 pv_controller.go:1452] error finding provisioning plugin for claim ephemeral-2621/inline-volume-fll82-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0522 07:24:01.178527       1 event.go:291] \"Event occurred\" object=\"ephemeral-2621/inline-volume-fll82-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0522 07:24:01.330104       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-northeast-2a/vol-041c9c58ddea47a3b\nI0522 07:24:01.330130       1 pv_controller.go:1436] volume \"pvc-eaa0a76f-0b33-4eba-8a03-56b0c5491e54\" deleted\nI0522 07:24:01.338824       1 pv_controller_base.go:505] deletion of claim \"fsgroupchangepolicy-8909/awsml7mt\" was already processed\nI0522 07:24:01.354261       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-05-22 07:23:32 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdcz\",\n  InstanceId: \"i-0b9275cce37678aeb\",\n  State: \"detaching\",\n  VolumeId: \"vol-041c9c58ddea47a3b\"\n}\nI0522 07:24:01.354302       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume \"pvc-eaa0a76f-0b33-4eba-8a03-56b0c5491e54\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-041c9c58ddea47a3b\") on node \"ip-172-20-35-65.ap-northeast-2.compute.internal\" \nI0522 07:24:01.537136       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" 
apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 2\"\nI0522 07:24:01.544834       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-99f7796d5\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:24:01.553604       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6893/webserver-58477d78f9\" need=3 deleting=3\nI0522 07:24:01.555144       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6893/webserver-58477d78f9\" relatedReplicaSets=[webserver-58477d78f9 webserver-847dcfb7fb webserver-99f7796d5]\nI0522 07:24:01.553982       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-58477d78f9 to 3\"\nI0522 07:24:01.555473       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-58477d78f9\" pod=\"deployment-6893/webserver-58477d78f9-t7q78\"\nI0522 07:24:01.555488       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-58477d78f9\" pod=\"deployment-6893/webserver-58477d78f9-bsknm\"\nI0522 07:24:01.555526       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-58477d78f9\" pod=\"deployment-6893/webserver-58477d78f9-sfdhs\"\nI0522 07:24:01.558568       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:24:01.565275       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-99f7796d5\" need=6 creating=3\nI0522 07:24:01.565786       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-99f7796d5 to 6\"\nI0522 07:24:01.578517       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-t7q78\" objectUID=a3f81eed-ee9d-4096-83c9-84a5e568da10 kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:01.579169       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-sfdhs\" objectUID=f405b544-265b-43d2-8d14-d22be658d061 kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:01.580611       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-bsknm\" objectUID=851c14be-01e9-4e9a-92a8-1e952a7eb56d kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:01.581262       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-58477d78f9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-58477d78f9-t7q78\"\nI0522 07:24:01.581289       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-99f7796d5-gtpqm\"\nI0522 07:24:01.581302       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-58477d78f9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-58477d78f9-bsknm\"\nI0522 07:24:01.581315       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-58477d78f9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-58477d78f9-sfdhs\"\nI0522 07:24:01.597506       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-bsknm\" objectUID=851c14be-01e9-4e9a-92a8-1e952a7eb56d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:01.598416       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-t7q78\" objectUID=a3f81eed-ee9d-4096-83c9-84a5e568da10 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:01.598635       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-sfdhs\" objectUID=f405b544-265b-43d2-8d14-d22be658d061 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:01.606705       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-99f7796d5-clfxj\"\nI0522 07:24:01.606985       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-99f7796d5-lsxvq\"\nI0522 07:24:01.643150       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-9979/pod-1dd4b10e-685f-4261-a74c-9c3ce396e0e5\" PVC=\"persistent-local-volumes-test-9979/pvc-779sb\"\nI0522 07:24:01.643172       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-9979/pvc-779sb\"\nI0522 07:24:01.652561       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-9979/pod-1dd4b10e-685f-4261-a74c-9c3ce396e0e5\" PVC=\"persistent-local-volumes-test-9979/pvc-779sb\"\nI0522 07:24:01.652581       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-9979/pvc-779sb\"\nI0522 07:24:01.657722       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-2621, name: inline-volume-fll82, uid: 0201a9b3-d44c-4710-90aa-396fe83c3159] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0522 07:24:01.658643       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2621/inline-volume-fll82-my-volume\" objectUID=f9028200-bdd5-4dea-98ac-6fad8cbfff14 kind=\"PersistentVolumeClaim\" virtual=false\nI0522 07:24:01.658915       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2621/inline-volume-fll82\" objectUID=0201a9b3-d44c-4710-90aa-396fe83c3159 kind=\"Pod\" virtual=false\nI0522 07:24:01.662825       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-2621, name: inline-volume-fll82-my-volume, uid: f9028200-bdd5-4dea-98ac-6fad8cbfff14] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-2621, name: inline-volume-fll82, uid: 0201a9b3-d44c-4710-90aa-396fe83c3159] is deletingDependents\nI0522 07:24:01.665990       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-2621/inline-volume-fll82-my-volume\" objectUID=f9028200-bdd5-4dea-98ac-6fad8cbfff14 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0522 
07:24:01.676714       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2621/inline-volume-fll82-my-volume\" objectUID=f9028200-bdd5-4dea-98ac-6fad8cbfff14 kind=\"PersistentVolumeClaim\" virtual=false\nE0522 07:24:01.677384       1 pv_controller.go:1452] error finding provisioning plugin for claim ephemeral-2621/inline-volume-fll82-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0522 07:24:01.678027       1 event.go:291] \"Event occurred\" object=\"ephemeral-2621/inline-volume-fll82-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0522 07:24:01.680959       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-2621/inline-volume-fll82-my-volume\"\nI0522 07:24:01.684375       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2621/inline-volume-fll82\" objectUID=0201a9b3-d44c-4710-90aa-396fe83c3159 kind=\"Pod\" virtual=false\nI0522 07:24:01.685756       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-2621, name: inline-volume-fll82, uid: 0201a9b3-d44c-4710-90aa-396fe83c3159]\nI0522 07:24:01.859507       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-58477d78f9\" need=3 creating=1\nI0522 07:24:01.869189       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-58477d78f9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-58477d78f9-c2xph\"\nI0522 07:24:01.872960       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-t6zjr\" objectUID=c81e4204-cb16-4fff-8f7c-209cdf4eb30b kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:01.874065       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7768-3398/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0522 07:24:01.890107       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-t6zjr\" objectUID=c81e4204-cb16-4fff-8f7c-209cdf4eb30b kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:02.064487       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-99f7796d5\" need=6 creating=1\nI0522 07:24:02.070560       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-99f7796d5-8qr6s\"\nI0522 07:24:02.242541       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-99f7796d5\" need=6 creating=1\nI0522 07:24:02.250626       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-99f7796d5-6rk4l\"\nI0522 07:24:02.402630       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-99f7796d5\" need=6 creating=1\nI0522 07:24:02.407185       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-99f7796d5-sl5kg\" objectUID=6c95dae2-8a3b-439a-a483-ccf6a688959a kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:02.416072       
1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-99f7796d5-sl5kg\" objectUID=6c95dae2-8a3b-439a-a483-ccf6a688959a kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:02.417541       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-99f7796d5-xbwh7\"\nI0522 07:24:02.443128       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:24:02.458279       1 event.go:291] \"Event occurred\" object=\"cronjob-9483/concurrent-27027804\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Completed\" message=\"Job completed\"\nI0522 07:24:02.466593       1 event.go:291] \"Event occurred\" object=\"cronjob-9483/concurrent\" kind=\"CronJob\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SawCompletedJob\" message=\"Saw completed job: concurrent-27027804, status: Complete\"\nI0522 07:24:02.485015       1 namespace_controller.go:185] Namespace has been deleted tables-8690\nI0522 07:24:02.485896       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-145/pvc-7w4l7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-145\\\" or manually created by system administrator\"\nI0522 07:24:02.502902       1 pv_controller.go:879] volume \"pvc-c5e4697d-54a5-45c2-a5fe-f9ed7acf8391\" entered phase \"Bound\"\nI0522 07:24:02.503161       1 pv_controller.go:982] volume \"pvc-c5e4697d-54a5-45c2-a5fe-f9ed7acf8391\" bound to claim \"csi-mock-volumes-145/pvc-7w4l7\"\nI0522 07:24:02.519729       1 pv_controller.go:823] claim \"csi-mock-volumes-145/pvc-7w4l7\" entered phase \"Bound\"\nI0522 07:24:02.579927       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-99f7796d5\" need=6 creating=1\nI0522 07:24:02.589139       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-99f7796d5-z572z\" objectUID=174bbdf3-ee79-48d5-bd35-6c7b584161aa kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:02.598477       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-99f7796d5-rflg4\"\nI0522 07:24:02.609569       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-99f7796d5-z572z\" objectUID=174bbdf3-ee79-48d5-bd35-6c7b584161aa kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:02.768902       1 pv_controller.go:879] volume \"local-pvbwpk4\" entered phase \"Available\"\nI0522 07:24:02.921984       1 pv_controller.go:930] claim \"persistent-local-volumes-test-2562/pvc-n67pq\" bound to volume \"local-pvbwpk4\"\nI0522 07:24:02.930997       1 pv_controller.go:879] volume \"local-pvbwpk4\" entered phase \"Bound\"\nI0522 07:24:02.931021       1 pv_controller.go:982] volume \"local-pvbwpk4\" bound to claim \"persistent-local-volumes-test-2562/pvc-n67pq\"\nI0522 07:24:02.935470       1 pv_controller.go:823] claim \"persistent-local-volumes-test-2562/pvc-n67pq\" entered phase \"Bound\"\nE0522 
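The garbagecollector.go "Processing object"/"Deleting object ... propagationPolicy=Background" lines are ownerReference cleanup: once an owner (here, a pod) is deleted, dependents such as its CiliumEndpoint are collected in the background, and the graph_builder/deletingDependents lines above show the foreground variant waiting on dependents first. A sketch of the ownerReference wiring that produces this behavior (names are illustrative):

    // Sketch: a dependent object carrying this ownerReference is deleted
    // automatically by the garbage collector once the owning pod is gone.
    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func ownedBy(pod *corev1.Pod) []metav1.OwnerReference {
    	controller := true
    	return []metav1.OwnerReference{{
    		APIVersion: "v1",
    		Kind:       "Pod",
    		Name:       pod.Name,
    		UID:        pod.UID, // the GC matches dependents by owner UID
    		Controller: &controller,
    	}}
    }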
E0522 07:24:03.508834       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 07:24:03.812202       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-6bc035de-4485-49a9-b328-3b3113b0e827" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-8183^4") on node "ip-172-20-63-92.ap-northeast-2.compute.internal" 
I0522 07:24:03.818446       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-6bc035de-4485-49a9-b328-3b3113b0e827" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-8183^4") on node "ip-172-20-63-92.ap-northeast-2.compute.internal" 
I0522 07:24:04.050767       1 namespace_controller.go:185] Namespace has been deleted provisioning-5822
I0522 07:24:04.163373       1 namespace_controller.go:185] Namespace has been deleted tables-7254
I0522 07:24:04.380658       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-6bc035de-4485-49a9-b328-3b3113b0e827" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-8183^4") on node "ip-172-20-63-92.ap-northeast-2.compute.internal" 
I0522 07:24:04.408402       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-8183/pvc-c8ld7"
I0522 07:24:04.428889       1 pv_controller.go:640] volume "pvc-6bc035de-4485-49a9-b328-3b3113b0e827" is released and reclaim policy "Delete" will be executed
I0522 07:24:04.453535       1 pv_controller.go:879] volume "pvc-6bc035de-4485-49a9-b328-3b3113b0e827" entered phase "Released"
I0522 07:24:04.465997       1 pv_controller.go:1341] isVolumeReleased[pvc-6bc035de-4485-49a9-b328-3b3113b0e827]: volume is released
I0522 07:24:04.494217       1 pv_controller_base.go:505] deletion of claim "csi-mock-volumes-8183/pvc-c8ld7" was already processed
I0522 07:24:04.884480       1 event.go:291] "Event occurred" object="deployment-5794/test-rollover-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rollover-controller to 0"
I0522 07:24:04.887078       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-5794/test-rollover-controller" need=0 deleting=1
I0522 07:24:04.887150       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-5794/test-rollover-controller" relatedReplicaSets=[test-rollover-controller test-rollover-deployment-78bc8b888c test-rollover-deployment-98c5f4599]
I0522 07:24:04.887232       1 controller_utils.go:602] "Deleting pod" controller="test-rollover-controller" pod="deployment-5794/test-rollover-controller-kpgzt"
E0522 07:24:04.897812       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 07:24:04.909855       1 garbagecollector.go:471] "Processing object" object="deployment-5794/test-rollover-controller-kpgzt" objectUID=fce0c947-a5d9-4980-801e-a6b42a821d32 kind="CiliumEndpoint" virtual=false
I0522 07:24:04.911408       1 event.go:291] "Event occurred" object="deployment-5794/test-rollover-controller" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rollover-controller-kpgzt"
I0522 07:24:04.968221       1 garbagecollector.go:580] "Deleting object" object="deployment-5794/test-rollover-controller-kpgzt" objectUID=fce0c947-a5d9-4980-801e-a6b42a821d32 kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:24:05.215260       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-9979/pod-1dd4b10e-685f-4261-a74c-9c3ce396e0e5" PVC="persistent-local-volumes-test-9979/pvc-779sb"
I0522 07:24:05.216740       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-9979/pvc-779sb"
I0522 07:24:05.235249       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-9979/pvc-779sb"
I0522 07:24:05.252646       1 pv_controller.go:640] volume "local-pvvnmwq" is released and reclaim policy "Retain" will be executed
I0522 07:24:05.258980       1 pv_controller.go:879] volume "local-pvvnmwq" entered phase "Released"
I0522 07:24:05.273263       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-9979/pvc-779sb" was already processed
I0522 07:24:05.405582       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-6087
I0522 07:24:05.619316       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6893/webserver-58477d78f9" need=2 deleting=1
I0522 07:24:05.619967       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6893/webserver-58477d78f9" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]
I0522 07:24:05.624184       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-58477d78f9 to 2"
I0522 07:24:05.626913       1 controller_utils.go:602] "Deleting pod" controller="webserver-58477d78f9" pod="deployment-6893/webserver-58477d78f9-n889f"
I0522 07:24:05.637652       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-58477d78f9-n889f" objectUID=80271e19-2387-44f6-bc90-1994ece70e8b kind="CiliumEndpoint" virtual=false
I0522 07:24:05.645962       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6893/webserver-99f7796d5" need=7 creating=1
I0522 07:24:05.647072       1 event.go:291] "Event occurred" object="deployment-6893/webserver-58477d78f9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-58477d78f9-n889f"
I0522 07:24:05.647735       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-99f7796d5 to 7"
I0522 07:24:05.674697       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-58477d78f9-n889f" objectUID=80271e19-2387-44f6-bc90-1994ece70e8b kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:24:05.675645       1 event.go:291] "Event occurred" object="deployment-6893/webserver-99f7796d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-99f7796d5-vdhz7"
I0522 07:24:05.924325       1 event.go:291] "Event occurred" object="provisioning-2059/awsth4wj" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0522 07:24:05.972262       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6893/webserver-58477d78f9" need=1 deleting=1
I0522 07:24:05.972342       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6893/webserver-58477d78f9" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]
I0522 07:24:05.972480       1 controller_utils.go:602] "Deleting pod" controller="webserver-58477d78f9" pod="deployment-6893/webserver-58477d78f9-c2xph"
I0522 07:24:05.972777       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-58477d78f9 to 1"
I0522 07:24:05.986401       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-58477d78f9-c2xph" objectUID=3e161f3d-6de0-48f1-a1e6-7dc522b24c9e kind="CiliumEndpoint" virtual=false
I0522 07:24:05.988934       1 event.go:291] "Event occurred" object="deployment-6893/webserver-58477d78f9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-58477d78f9-c2xph"
I0522 07:24:05.991417       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-58477d78f9-c2xph" objectUID=3e161f3d-6de0-48f1-a1e6-7dc522b24c9e kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:24:06.608289       1 event.go:291] "Event occurred" object="ephemeral-2621-634/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
I0522 07:24:06.882508       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6893/webserver-58477d78f9" need=0 deleting=1
I0522 07:24:06.882930       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6893/webserver-58477d78f9" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]
I0522 07:24:06.882615       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-58477d78f9 to 0"
I0522 07:24:06.883185       1 controller_utils.go:602] "Deleting pod" controller="webserver-58477d78f9" pod="deployment-6893/webserver-58477d78f9-zhwwg"
I0522 07:24:06.906895       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-58477d78f9-zhwwg" objectUID=af2492a0-e598-4653-a37e-6d242ab89edb kind="CiliumEndpoint" virtual=false
I0522 07:24:06.910039       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6893/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0522 07:24:06.911363       1 event.go:291] "Event occurred" object="deployment-6893/webserver-58477d78f9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-58477d78f9-zhwwg"
I0522 07:24:06.914315       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-58477d78f9-zhwwg" objectUID=af2492a0-e598-4653-a37e-6d242ab89edb kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:24:07.110943       1 event.go:291] "Event occurred" object="ephemeral-2621-634/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
E0522 07:24:07.375036       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 07:24:07.432765       1 event.go:291] "Event occurred" object="ephemeral-2621-634/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
I0522 07:24:07.773013       1 event.go:291] "Event occurred" object="ephemeral-2621-634/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
I0522 07:24:08.086138       1 event.go:291] "Event occurred" object="ephemeral-2621-634/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
E0522 07:24:08.390052       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 07:24:08.494480       1 namespace_controller.go:185] Namespace has been deleted networkpolicies-6011
E0522 07:24:08.499336       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 07:24:08.557126       1 event.go:291] "Event occurred" object="ephemeral-2621/inline-volume-tester-zrt6x-my-volume-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForPodScheduled" message="waiting for pod inline-volume-tester-zrt6x to be scheduled"
I0522 07:24:09.807804       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6893/webserver-99f7796d5" need=6 deleting=1
I0522 07:24:09.808097       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6893/webserver-99f7796d5" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]
I0522 07:24:09.808436       1 controller_utils.go:602] "Deleting pod" controller="webserver-99f7796d5" pod="deployment-6893/webserver-99f7796d5-vdhz7"
I0522 07:24:09.808865       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-99f7796d5 to 6"
I0522 07:24:09.919331       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-99f7796d5-vdhz7" objectUID=1828b720-9388-42ee-ae31-5cde22d365d3 kind="CiliumEndpoint" virtual=false
I0522 07:24:09.920588       1 event.go:291] "Event occurred" object="deployment-6893/webserver-99f7796d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-99f7796d5-vdhz7"
I0522 07:24:09.953622       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-99f7796d5-vdhz7" objectUID=1828b720-9388-42ee-ae31-5cde22d365d3 kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:24:10.007627       1 event.go:291] "Event occurred" object="ephemeral-2621/inline-volume-tester-zrt6x-my-volume-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-ephemeral-2621\" or manually created by system administrator"
I0522 07:24:10.008247       1 event.go:291] "Event occurred" object="ephemeral-2621/inline-volume-tester-zrt6x-my-volume-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-ephemeral-2621\" or manually created by system administrator"
I0522 07:24:10.465913       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-9979
I0522 07:24:10.809339       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-9690/pvc-sdks2"
I0522 07:24:10.817673       1 pv_controller.go:640] volume "local-crr7r" is released and reclaim policy "Retain" will be executed
I0522 07:24:10.821173       1 pv_controller.go:879] volume "local-crr7r" entered phase "Released"
I0522 07:24:10.965652       1 pv_controller_base.go:505] deletion of claim "provisioning-9690/pvc-sdks2" was already processed
I0522 07:24:11.014492       1 namespace_controller.go:185] Namespace has been deleted container-probe-6659
I0522 07:24:11.086350       1 pv_controller.go:879] volume "pvc-e0a81a92-2401-4d39-9540-848ff30af562" entered phase "Bound"
I0522 07:24:11.086412       1 pv_controller.go:982] volume "pvc-e0a81a92-2401-4d39-9540-848ff30af562" bound to claim "ephemeral-2621/inline-volume-tester-zrt6x-my-volume-0"
I0522 07:24:11.094907       1 pv_controller.go:823] claim "ephemeral-2621/inline-volume-tester-zrt6x-my-volume-0" entered phase "Bound"
I0522 07:24:11.349462       1 namespace_controller.go:185] Namespace has been deleted pods-4790
I0522 07:24:11.523888       1 replica_set.go:559] "Too few replicas" replicaSet="svc-latency-8527/svc-latency-rc" need=1 creating=1
I0522 07:24:11.617406       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-northeast-2a/vol-0d363483961ab8c94
E0522 07:24:11.667230       1 tokens_controller.go:262] error synchronizing serviceaccount pods-2185/default: secrets "default-token-jr4dx" is forbidden: unable to create new content in namespace pods-2185 because it is being terminated
I0522 07:24:11.673091       1 pv_controller.go:1677] volume "pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad" provisioned for claim "provisioning-2059/awsth4wj"
I0522 07:24:11.673297       1 event.go:291] "Event occurred" object="provisioning-2059/awsth4wj" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad using kubernetes.io/aws-ebs"
I0522 07:24:11.683341       1 pv_controller.go:879] volume "pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad" entered phase "Bound"
I0522 07:24:11.683981       1 pv_controller.go:982] volume "pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad" bound to claim "provisioning-2059/awsth4wj"
I0522 07:24:11.693099       1 pv_controller.go:823] claim "provisioning-2059/awsth4wj" entered phase "Bound"
I0522 07:24:12.005203       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-e0a81a92-2401-4d39-9540-848ff30af562" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-2621^b3fe0e1c-bace-11eb-a2fe-e2fc416d88cd") from node "ip-172-20-48-92.ap-northeast-2.compute.internal" 
I0522 07:24:12.018533       1 replica_set.go:559] "Too few replicas" replicaSet="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" need=40 creating=40
W0522 07:24:12.066079       1 endpointslice_controller.go:305] Error syncing endpoint slices for service "svc-latency-8527/latency-svc-grlt4", retrying. Error: failed to update latency-svc-grlt4-hxqvq EndpointSlice for Service svc-latency-8527/latency-svc-grlt4: endpointslices.discovery.k8s.io "latency-svc-grlt4-hxqvq" not found
I0522 07:24:12.066476       1 event.go:291] "Event occurred" object="svc-latency-8527/latency-svc-grlt4" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service svc-latency-8527/latency-svc-grlt4: failed to update latency-svc-grlt4-hxqvq EndpointSlice for Service svc-latency-8527/latency-svc-grlt4: endpointslices.discovery.k8s.io \"latency-svc-grlt4-hxqvq\" not found"
I0522 07:24:12.088810       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-vvdhq"
I0522 07:24:12.138153       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-vs7gk"
I0522 07:24:12.144589       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-ldbv2"
I0522 07:24:12.166070       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-99jgn"
I0522 07:24:12.169757       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-cpvtd"
I0522 07:24:12.169877       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-hqnpz"
I0522 07:24:12.172871       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-52sc9"
I0522 07:24:12.207471       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-w47ks"
I0522 07:24:12.207623       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-rcg4f"
I0522 07:24:12.214152       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-9fwpg"
I0522 07:24:12.214331       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-kxd7r"
I0522 07:24:12.214508       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-w8r4f"
I0522 07:24:12.214675       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-xvsqz"
I0522 07:24:12.214847       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-hsmqz"
I0522 07:24:12.215022       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-s5npf"
I0522 07:24:12.255368       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-hgf5d"
I0522 07:24:12.255585       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-gmfs8"
I0522 07:24:12.267740       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-7ccb4"
I0522 07:24:12.269888       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-4ktmx"
I0522 07:24:12.269982       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-xkxwk"
I0522 07:24:12.270073       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-pwlp2"
I0522 07:24:12.270118       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-lfwx9"
I0522 07:24:12.270198       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-b4nl5"
I0522 07:24:12.270227       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-zbd9x"
I0522 07:24:12.270312       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-8fvc8"
I0522 07:24:12.270396       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-8qjmc"
I0522 07:24:12.270482       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-dxkb8"
I0522 07:24:12.270531       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-n6lq6"
I0522 07:24:12.270644       1 event.go:291] "Event occurred" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-4b9qz"
I0522 07:24:12.307018       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0d363483961ab8c94") from node "ip-172-20-35-65.ap-northeast-2.compute.internal" 
I0522 07:24:12.315094       1 event.go:291] "Event occurred" 
object=\"kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-x8rmt\"\nE0522 07:24:12.340535       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-grlt4.168152aeebed0634\", GenerateName:\"\", Namespace:\"svc-latency-8527\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"svc-latency-8527\", Name:\"latency-svc-grlt4\", UID:\"75a7820e-3d87-4159-966a-d36f473083c8\", APIVersion:\"v1\", ResourceVersion:\"34005\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpointSlices\", Message:\"Error updating Endpoint Slices for Service svc-latency-8527/latency-svc-grlt4: failed to update latency-svc-grlt4-hxqvq EndpointSlice for Service svc-latency-8527/latency-svc-grlt4: endpointslices.discovery.k8s.io \\\"latency-svc-grlt4-hxqvq\\\" not found\", Source:v1.EventSource{Component:\"endpoint-slice-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0224a4703efee34, ext:1037731821650, loc:(*time.Location)(0x72fd420)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0224a4703efee34, ext:1037731821650, loc:(*time.Location)(0x72fd420)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-grlt4.168152aeebed0634\" is forbidden: unable to create new content in namespace svc-latency-8527 because it is being terminated' (will not retry!)\nI0522 07:24:12.353264       1 aws.go:2014] Assigned mount device cq -> volume vol-0d363483961ab8c94\nI0522 07:24:12.356144       1 event.go:291] \"Event occurred\" object=\"kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-smktr\"\nI0522 07:24:12.444213       1 event.go:291] \"Event occurred\" object=\"kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-ztfmc\"\nI0522 07:24:12.491278       1 event.go:291] \"Event occurred\" object=\"kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-6pdz2\"\nI0522 07:24:12.545586       1 event.go:291] \"Event occurred\" object=\"kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-z2tbk\"\nE0522 07:24:12.569339       1 pv_controller.go:1452] error 
finding provisioning plugin for claim volumemode-5703/pvc-mlqv7: storageclass.storage.k8s.io \"volumemode-5703\" not found\nI0522 07:24:12.569934       1 event.go:291] \"Event occurred\" object=\"volumemode-5703/pvc-mlqv7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-5703\\\" not found\"\nI0522 07:24:12.585213       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume \"pvc-e0a81a92-2401-4d39-9540-848ff30af562\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-2621^b3fe0e1c-bace-11eb-a2fe-e2fc416d88cd\") from node \"ip-172-20-48-92.ap-northeast-2.compute.internal\" \nI0522 07:24:12.585399       1 event.go:291] \"Event occurred\" object=\"ephemeral-2621/inline-volume-tester-zrt6x\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-e0a81a92-2401-4d39-9540-848ff30af562\\\" \"\nI0522 07:24:12.598053       1 event.go:291] \"Event occurred\" object=\"kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-gwklz\"\nI0522 07:24:12.644110       1 event.go:291] \"Event occurred\" object=\"kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-8h5fp\"\nI0522 07:24:12.692384       1 event.go:291] \"Event occurred\" object=\"kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-kr6fh\"\nI0522 07:24:12.700569       1 aws.go:2427] AttachVolume volume=\"vol-0d363483961ab8c94\" instance=\"i-0b9275cce37678aeb\" request returned {\n  AttachTime: 2021-05-22 07:24:12.689 +0000 UTC,\n  Device: \"/dev/xvdcq\",\n  InstanceId: \"i-0b9275cce37678aeb\",\n  State: \"attaching\",\n  VolumeId: \"vol-0d363483961ab8c94\"\n}\nI0522 07:24:12.733644       1 pv_controller.go:879] volume \"local-nmzfd\" entered phase \"Available\"\nI0522 07:24:12.742852       1 event.go:291] \"Event occurred\" object=\"kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-jgbh9\"\nI0522 07:24:12.791482       1 event.go:291] \"Event occurred\" object=\"kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-ptx8l\"\nI0522 07:24:12.849114       1 event.go:291] \"Event occurred\" object=\"kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-7gcgk\"\nW0522 07:24:12.880760       1 endpointslice_controller.go:305] Error syncing endpoint slices for service \"svc-latency-8527/latency-svc-xcglr\", retrying. 
Error: failed to update latency-svc-xcglr-2tnqb EndpointSlice for Service svc-latency-8527/latency-svc-xcglr: endpointslices.discovery.k8s.io \"latency-svc-xcglr-2tnqb\" not found\nI0522 07:24:12.881049       1 event.go:291] \"Event occurred\" object=\"svc-latency-8527/latency-svc-xcglr\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpointSlices\" message=\"Error updating Endpoint Slices for Service svc-latency-8527/latency-svc-xcglr: failed to update latency-svc-xcglr-2tnqb EndpointSlice for Service svc-latency-8527/latency-svc-xcglr: endpointslices.discovery.k8s.io \\\"latency-svc-xcglr-2tnqb\\\" not found\"\nE0522 07:24:12.911161       1 tokens_controller.go:262] error synchronizing serviceaccount deployment-5794/default: secrets \"default-token-fd4lp\" is forbidden: unable to create new content in namespace deployment-5794 because it is being terminated\nI0522 07:24:12.992617       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7768/pvc-csdd7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-7768\\\" or manually created by system administrator\"\nI0522 07:24:12.993030       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7768/pvc-csdd7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-7768\\\" or manually created by system administrator\"\nI0522 07:24:13.021326       1 pv_controller.go:879] volume \"pvc-86621528-15d6-4d1d-96e6-0db7fed348de\" entered phase \"Bound\"\nI0522 07:24:13.021478       1 pv_controller.go:982] volume \"pvc-86621528-15d6-4d1d-96e6-0db7fed348de\" bound to claim \"csi-mock-volumes-7768/pvc-csdd7\"\nI0522 07:24:13.036492       1 pv_controller.go:823] claim \"csi-mock-volumes-7768/pvc-csdd7\" entered phase \"Bound\"\nE0522 07:24:13.179145       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-xcglr.168152af1c7c3b96\", GenerateName:\"\", Namespace:\"svc-latency-8527\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"svc-latency-8527\", Name:\"latency-svc-xcglr\", UID:\"c2275b1b-7ac0-4488-bcbe-31e80eebe1b6\", APIVersion:\"v1\", ResourceVersion:\"33460\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpointSlices\", Message:\"Error updating Endpoint Slices for Service svc-latency-8527/latency-svc-xcglr: failed to update latency-svc-xcglr-2tnqb EndpointSlice for Service svc-latency-8527/latency-svc-xcglr: endpointslices.discovery.k8s.io \\\"latency-svc-xcglr-2tnqb\\\" not found\", Source:v1.EventSource{Component:\"endpoint-slice-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0224a47347f2396, ext:1038546513291, loc:(*time.Location)(0x72fd420)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0224a47347f2396, ext:1038546513291, loc:(*time.Location)(0x72fd420)}}, 
Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-xcglr.168152af1c7c3b96\" is forbidden: unable to create new content in namespace svc-latency-8527 because it is being terminated' (will not retry!)\nI0522 07:24:13.281825       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-5794/test-rollover-deployment-98c5f4599\" need=1 creating=1\nI0522 07:24:13.310647       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-5794/test-rollover-deployment-98c5f4599-z46zf\" objectUID=2a7075ae-86f1-4f97-a682-bba20f745724 kind=\"CiliumEndpoint\" virtual=false\nE0522 07:24:13.335372       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0522 07:24:13.335752       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-5794/test-rollover-deployment-98c5f4599-z46zf\" objectUID=2a7075ae-86f1-4f97-a682-bba20f745724 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:13.482027       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-5794/test-rollover-deployment-98c5f4599\" objectUID=3d0b3663-f78a-4f97-b09c-4b064785a0c5 kind=\"ReplicaSet\" virtual=false\nI0522 07:24:13.486087       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-5794/test-rollover-deployment\"\nI0522 07:24:13.486135       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-5794/test-rollover-controller\" objectUID=52c795ee-229f-4776-8e28-d67dd8ddbe94 kind=\"ReplicaSet\" virtual=false\nI0522 07:24:13.486329       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-5794/test-rollover-deployment-78bc8b888c\" objectUID=d3f2cbd4-eff0-449a-bdd3-8ce7f28da3df kind=\"ReplicaSet\" virtual=false\nI0522 07:24:13.557690       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-5794/test-rollover-deployment-98c5f4599\" objectUID=3d0b3663-f78a-4f97-b09c-4b064785a0c5 kind=\"ReplicaSet\" propagationPolicy=Background\nI0522 07:24:13.558343       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-5794/test-rollover-deployment-78bc8b888c\" objectUID=d3f2cbd4-eff0-449a-bdd3-8ce7f28da3df kind=\"ReplicaSet\" propagationPolicy=Background\nI0522 07:24:13.559331       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-5794/test-rollover-controller\" objectUID=52c795ee-229f-4776-8e28-d67dd8ddbe94 kind=\"ReplicaSet\" propagationPolicy=Background\nI0522 07:24:13.877686       1 endpoints_controller.go:368] \"Error syncing endpoints, retrying\" service=\"svc-latency-8527/latency-svc-88ppr\" err=\"Operation cannot be fulfilled on endpoints \\\"latency-svc-88ppr\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-88ppr, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: eba57294-25cd-467d-b7c0-a554913594a6, UID in object meta: \"\nI0522 07:24:13.878108       1 event.go:291] \"Event occurred\" object=\"svc-latency-8527/latency-svc-88ppr\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint svc-latency-8527/latency-svc-88ppr: Operation cannot be fulfilled 
on endpoints \\\"latency-svc-88ppr\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-88ppr, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: eba57294-25cd-467d-b7c0-a554913594a6, UID in object meta: \"\nE0522 07:24:14.177185       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-88ppr.168152af57e7d19d\", GenerateName:\"\", Namespace:\"svc-latency-8527\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"svc-latency-8527\", Name:\"latency-svc-88ppr\", UID:\"eba57294-25cd-467d-b7c0-a554913594a6\", APIVersion:\"v1\", ResourceVersion:\"33974\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpoint\", Message:\"Failed to update endpoint svc-latency-8527/latency-svc-88ppr: Operation cannot be fulfilled on endpoints \\\"latency-svc-88ppr\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-88ppr, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: eba57294-25cd-467d-b7c0-a554913594a6, UID in object meta: \", Source:v1.EventSource{Component:\"endpoint-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0224a47744fef9d, ext:1039543419790, loc:(*time.Location)(0x72fd420)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0224a47744fef9d, ext:1039543419790, loc:(*time.Location)(0x72fd420)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-88ppr.168152af57e7d19d\" is forbidden: unable to create new content in namespace svc-latency-8527 because it is being terminated' (will not retry!)\nE0522 07:24:14.188630       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0522 07:24:14.383420       1 endpoints_controller.go:368] \"Error syncing endpoints, retrying\" service=\"svc-latency-8527/latency-svc-c4bgv\" err=\"Operation cannot be fulfilled on endpoints \\\"latency-svc-c4bgv\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-c4bgv, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 0a5a3de4-0bee-44ac-a8da-ca8ffce9d53b, UID in object meta: \"\nI0522 07:24:14.383821       1 event.go:291] \"Event occurred\" object=\"svc-latency-8527/latency-svc-c4bgv\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint svc-latency-8527/latency-svc-c4bgv: Operation cannot be fulfilled on endpoints \\\"latency-svc-c4bgv\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-c4bgv, ResourceVersion: 0, AdditionalErrorMsg: Precondition 
failed: UID in precondition: 0a5a3de4-0bee-44ac-a8da-ca8ffce9d53b, UID in object meta: \"\nE0522 07:24:14.675846       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-c4bgv.168152af760c7eea\", GenerateName:\"\", Namespace:\"svc-latency-8527\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"svc-latency-8527\", Name:\"latency-svc-c4bgv\", UID:\"0a5a3de4-0bee-44ac-a8da-ca8ffce9d53b\", APIVersion:\"v1\", ResourceVersion:\"34154\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpoint\", Message:\"Failed to update endpoint svc-latency-8527/latency-svc-c4bgv: Operation cannot be fulfilled on endpoints \\\"latency-svc-c4bgv\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-c4bgv, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 0a5a3de4-0bee-44ac-a8da-ca8ffce9d53b, UID in object meta: \", Source:v1.EventSource{Component:\"endpoint-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0224a4796d9d2ea, ext:1040049139920, loc:(*time.Location)(0x72fd420)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0224a4796d9d2ea, ext:1040049139920, loc:(*time.Location)(0x72fd420)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-c4bgv.168152af760c7eea\" is forbidden: unable to create new content in namespace svc-latency-8527 because it is being terminated' (will not retry!)\nI0522 07:24:14.733812       1 endpoints_controller.go:368] \"Error syncing endpoints, retrying\" service=\"svc-latency-8527/latency-svc-gzvnx\" err=\"Operation cannot be fulfilled on endpoints \\\"latency-svc-gzvnx\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-gzvnx, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 7995ed7d-4007-4971-ab2d-2957519bdf46, UID in object meta: \"\nI0522 07:24:14.734146       1 event.go:291] \"Event occurred\" object=\"svc-latency-8527/latency-svc-gzvnx\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint svc-latency-8527/latency-svc-gzvnx: Operation cannot be fulfilled on endpoints \\\"latency-svc-gzvnx\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-gzvnx, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 7995ed7d-4007-4971-ab2d-2957519bdf46, UID in object meta: \"\nI0522 07:24:14.815890       1 aws.go:2037] Releasing in-process attachment entry: cq -> volume vol-0d363483961ab8c94\nI0522 07:24:14.815987       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume \"pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0d363483961ab8c94\") from node 
\"ip-172-20-35-65.ap-northeast-2.compute.internal\" \nI0522 07:24:14.816220       1 event.go:291] \"Event occurred\" object=\"provisioning-2059/pod-subpath-test-dynamicpv-966q\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad\\\" \"\nE0522 07:24:14.980544       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-gzvnx.168152af8aef52db\", GenerateName:\"\", Namespace:\"svc-latency-8527\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"svc-latency-8527\", Name:\"latency-svc-gzvnx\", UID:\"7995ed7d-4007-4971-ab2d-2957519bdf46\", APIVersion:\"v1\", ResourceVersion:\"33970\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpoint\", Message:\"Failed to update endpoint svc-latency-8527/latency-svc-gzvnx: Operation cannot be fulfilled on endpoints \\\"latency-svc-gzvnx\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-gzvnx, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 7995ed7d-4007-4971-ab2d-2957519bdf46, UID in object meta: \", Source:v1.EventSource{Component:\"endpoint-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0224a47abbca6db, ext:1040399549643, loc:(*time.Location)(0x72fd420)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0224a47abbca6db, ext:1040399549643, loc:(*time.Location)(0x72fd420)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-gzvnx.168152af8aef52db\" is forbidden: unable to create new content in namespace svc-latency-8527 because it is being terminated' (will not retry!)\nI0522 07:24:15.478770       1 endpoints_controller.go:368] \"Error syncing endpoints, retrying\" service=\"svc-latency-8527/latency-svc-qjv8l\" err=\"Operation cannot be fulfilled on endpoints \\\"latency-svc-qjv8l\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-qjv8l, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 006b0116-72f1-4566-90ab-cac248382b2b, UID in object meta: \"\nI0522 07:24:15.479089       1 event.go:291] \"Event occurred\" object=\"svc-latency-8527/latency-svc-qjv8l\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint svc-latency-8527/latency-svc-qjv8l: Operation cannot be fulfilled on endpoints \\\"latency-svc-qjv8l\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-qjv8l, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 006b0116-72f1-4566-90ab-cac248382b2b, UID in object meta: \"\nI0522 07:24:15.528041       1 namespace_controller.go:185] Namespace has been deleted 
csi-mock-volumes-8183\nI0522 07:24:15.592359       1 namespace_controller.go:185] Namespace has been deleted fsgroupchangepolicy-8909\nE0522 07:24:15.776578       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-qjv8l.168152afb7563ba8\", GenerateName:\"\", Namespace:\"svc-latency-8527\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"svc-latency-8527\", Name:\"latency-svc-qjv8l\", UID:\"006b0116-72f1-4566-90ab-cac248382b2b\", APIVersion:\"v1\", ResourceVersion:\"34457\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpoint\", Message:\"Failed to update endpoint svc-latency-8527/latency-svc-qjv8l: Operation cannot be fulfilled on endpoints \\\"latency-svc-qjv8l\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-qjv8l, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 006b0116-72f1-4566-90ab-cac248382b2b, UID in object meta: \", Source:v1.EventSource{Component:\"endpoint-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0224a47dc88c5a8, ext:1041144491425, loc:(*time.Location)(0x72fd420)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0224a47dc88c5a8, ext:1041144491425, loc:(*time.Location)(0x72fd420)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-qjv8l.168152afb7563ba8\" is forbidden: unable to create new content in namespace svc-latency-8527 because it is being terminated' (will not retry!)\nI0522 07:24:15.859487       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8183-2424/csi-mockplugin-675875d84\" objectUID=118b924a-9c56-4132-86bd-9dfbecf3e8f8 kind=\"ControllerRevision\" virtual=false\nI0522 07:24:15.859690       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-8183-2424/csi-mockplugin\nI0522 07:24:15.859833       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8183-2424/csi-mockplugin-0\" objectUID=0034c4f2-6ca2-42bf-871a-a0128261b0f9 kind=\"Pod\" virtual=false\nI0522 07:24:15.864948       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8183-2424/csi-mockplugin-675875d84\" objectUID=118b924a-9c56-4132-86bd-9dfbecf3e8f8 kind=\"ControllerRevision\" propagationPolicy=Background\nI0522 07:24:15.865832       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8183-2424/csi-mockplugin-0\" objectUID=0034c4f2-6ca2-42bf-871a-a0128261b0f9 kind=\"Pod\" propagationPolicy=Background\nI0522 07:24:15.884775       1 endpoints_controller.go:368] \"Error syncing endpoints, retrying\" service=\"svc-latency-8527/latency-svc-s78gj\" err=\"Operation cannot be fulfilled on endpoints \\\"latency-svc-s78gj\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-s78gj, ResourceVersion: 0, 
AdditionalErrorMsg: Precondition failed: UID in precondition: f58dc318-49a1-475d-bf57-66873a694e01, UID in object meta: \"\nI0522 07:24:15.884900       1 event.go:291] \"Event occurred\" object=\"svc-latency-8527/latency-svc-s78gj\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint svc-latency-8527/latency-svc-s78gj: Operation cannot be fulfilled on endpoints \\\"latency-svc-s78gj\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-s78gj, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f58dc318-49a1-475d-bf57-66873a694e01, UID in object meta: \"\nI0522 07:24:16.017559       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8183-2424/csi-mockplugin-attacher-65cc955cb6\" objectUID=bb4c4471-477a-47c1-9d56-fea08992603e kind=\"ControllerRevision\" virtual=false\nI0522 07:24:16.017982       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-8183-2424/csi-mockplugin-attacher\nI0522 07:24:16.018537       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8183-2424/csi-mockplugin-attacher-0\" objectUID=41c34f18-57f9-47c3-bfad-3282732233fd kind=\"Pod\" virtual=false\nI0522 07:24:16.026764       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8183-2424/csi-mockplugin-attacher-0\" objectUID=41c34f18-57f9-47c3-bfad-3282732233fd kind=\"Pod\" propagationPolicy=Background\nI0522 07:24:16.035177       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8183-2424/csi-mockplugin-attacher-65cc955cb6\" objectUID=bb4c4471-477a-47c1-9d56-fea08992603e kind=\"ControllerRevision\" propagationPolicy=Background\nI0522 07:24:16.135646       1 pv_controller.go:930] claim \"volumemode-5703/pvc-mlqv7\" bound to volume \"local-nmzfd\"\nE0522 07:24:16.194842       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-s78gj.168152afcf89bc74\", GenerateName:\"\", Namespace:\"svc-latency-8527\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"svc-latency-8527\", Name:\"latency-svc-s78gj\", UID:\"f58dc318-49a1-475d-bf57-66873a694e01\", APIVersion:\"v1\", ResourceVersion:\"33803\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpoint\", Message:\"Failed to update endpoint svc-latency-8527/latency-svc-s78gj: Operation cannot be fulfilled on endpoints \\\"latency-svc-s78gj\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-8527/latency-svc-s78gj, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f58dc318-49a1-475d-bf57-66873a694e01, UID in object meta: \", Source:v1.EventSource{Component:\"endpoint-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0224a47f4bc4674, ext:1041550519897, loc:(*time.Location)(0x72fd420)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0224a47f4bc4674, ext:1041550519897, loc:(*time.Location)(0x72fd420)}}, Count:1, 
Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-s78gj.168152afcf89bc74\" is forbidden: unable to create new content in namespace svc-latency-8527 because it is being terminated' (will not retry!)\nI0522 07:24:16.230133       1 pv_controller.go:879] volume \"local-nmzfd\" entered phase \"Bound\"\nI0522 07:24:16.231127       1 pv_controller.go:982] volume \"local-nmzfd\" bound to claim \"volumemode-5703/pvc-mlqv7\"\nI0522 07:24:16.289911       1 pv_controller.go:823] claim \"volumemode-5703/pvc-mlqv7\" entered phase \"Bound\"\nE0522 07:24:16.643944       1 tokens_controller.go:262] error synchronizing serviceaccount svc-latency-8527/default: serviceaccounts \"default\" not found\nI0522 07:24:16.651072       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-8527/svc-latency-rc-clss9\" objectUID=4aeeb4cc-8ae9-497c-8988-81d245e45efd kind=\"Pod\" virtual=false\nE0522 07:24:16.680619       1 namespace_controller.go:162] deletion of namespace svc-latency-8527 failed: unexpected items still remain in namespace: svc-latency-8527 for gvr: /v1, Resource=pods\nE0522 07:24:16.853085       1 namespace_controller.go:162] deletion of namespace svc-latency-8527 failed: unexpected items still remain in namespace: svc-latency-8527 for gvr: /v1, Resource=pods\nI0522 07:24:16.871353       1 namespace_controller.go:185] Namespace has been deleted pods-2185\nE0522 07:24:17.206370       1 namespace_controller.go:162] deletion of namespace svc-latency-8527 failed: unexpected items still remain in namespace: svc-latency-8527 for gvr: /v1, Resource=pods\nE0522 07:24:17.505191       1 namespace_controller.go:162] deletion of namespace svc-latency-8527 failed: unexpected items still remain in namespace: svc-latency-8527 for gvr: /v1, Resource=pods\nI0522 07:24:18.047857       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [kubectl.example.com/v1, Resource=e2e-test-kubectl-3030-crds], removed: [crd-publish-openapi-test-multi-ver.example.com/v3, Resource=e2e-test-crd-publish-openapi-3832-crds]\nI0522 07:24:18.048251       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-kubectl-3030-crds.kubectl.example.com\nI0522 07:24:18.048414       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0522 07:24:18.048731       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (19h34m32.649212209s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0522 07:24:18.106228       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [kubectl.example.com/v1, Resource=e2e-test-kubectl-3030-crds], removed: [crd-publish-openapi-test-multi-ver.example.com/v3, Resource=e2e-test-crd-publish-openapi-3832-crds]\nI0522 07:24:18.148715       1 shared_informer.go:247] Caches are synced for resource quota \nI0522 07:24:18.148916       1 resource_quota_controller.go:454] synced quota controller\nI0522 07:24:18.150371       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0522 07:24:18.150562       1 shared_informer.go:247] Caches are synced for garbage collector \nI0522 07:24:18.150649       1 garbagecollector.go:254] synced garbage collector\nE0522 07:24:18.269959       1 
namespace_controller.go:162] deletion of namespace svc-latency-8527 failed: unexpected items still remain in namespace: svc-latency-8527 for gvr: /v1, Resource=pods\nE0522 07:24:18.443420       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0522 07:24:18.484561       1 namespace_controller.go:162] deletion of namespace svc-latency-8527 failed: unexpected items still remain in namespace: svc-latency-8527 for gvr: /v1, Resource=pods\nI0522 07:24:18.488725       1 event.go:291] \"Event occurred\" object=\"volume-expand-6779/awsrmn7t\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0522 07:24:18.819477       1 namespace_controller.go:185] Namespace has been deleted deployment-5794\nI0522 07:24:18.843044       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-2562/pod-dcc10c5c-cabf-4de1-943a-2ee30d851875\" PVC=\"persistent-local-volumes-test-2562/pvc-n67pq\"\nI0522 07:24:18.843064       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-2562/pvc-n67pq\"\nI0522 07:24:18.845852       1 namespace_controller.go:185] Namespace has been deleted endpointslice-5298\nI0522 07:24:19.622917       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-7646/inline-volume-tester-crvr7\" PVC=\"ephemeral-7646/inline-volume-tester-crvr7-my-volume-0\"\nI0522 07:24:19.623041       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-7646/inline-volume-tester-crvr7-my-volume-0\"\nI0522 07:24:19.823552       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-7646/inline-volume-tester-crvr7-my-volume-0\"\nI0522 07:24:19.828149       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7646/inline-volume-tester-crvr7\" objectUID=2a208ac9-0ded-4ef8-9980-887dde4e680f kind=\"Pod\" virtual=false\nI0522 07:24:19.830823       1 pv_controller.go:640] volume \"pvc-ad39a079-3d41-42d8-8a6c-4d4a4e68f53a\" is released and reclaim policy \"Delete\" will be executed\nI0522 07:24:19.831204       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-7646, name: inline-volume-tester-crvr7, uid: 2a208ac9-0ded-4ef8-9980-887dde4e680f]\nI0522 07:24:19.834176       1 pv_controller.go:879] volume \"pvc-ad39a079-3d41-42d8-8a6c-4d4a4e68f53a\" entered phase \"Released\"\nI0522 07:24:19.839525       1 pv_controller.go:1341] isVolumeReleased[pvc-ad39a079-3d41-42d8-8a6c-4d4a4e68f53a]: volume is released\nI0522 07:24:19.852702       1 pv_controller_base.go:505] deletion of claim \"ephemeral-7646/inline-volume-tester-crvr7-my-volume-0\" was already processed\nI0522 07:24:20.147017       1 namespace_controller.go:185] Namespace has been deleted services-7041\nE0522 07:24:20.208552       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0522 07:24:20.769907       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested 
resource\nE0522 07:24:20.943610       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods\nE0522 07:24:21.076434       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods\nE0522 07:24:21.200535       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods\nE0522 07:24:21.367009       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods\nE0522 07:24:21.568779       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods\nE0522 07:24:21.661482       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0522 07:24:21.682959       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-ad39a079-3d41-42d8-8a6c-4d4a4e68f53a\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7646^764b3fab-bace-11eb-8dd7-9233d21c03eb\") on node \"ip-172-20-48-92.ap-northeast-2.compute.internal\" \nI0522 07:24:21.685485       1 operation_generator.go:1483] Verified volume is safe to detach for volume \"pvc-ad39a079-3d41-42d8-8a6c-4d4a4e68f53a\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7646^764b3fab-bace-11eb-8dd7-9233d21c03eb\") on node \"ip-172-20-48-92.ap-northeast-2.compute.internal\" \nE0522 07:24:21.794401       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods\nE0522 07:24:22.102936       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods\nI0522 07:24:22.265616       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume \"pvc-ad39a079-3d41-42d8-8a6c-4d4a4e68f53a\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7646^764b3fab-bace-11eb-8dd7-9233d21c03eb\") on node \"ip-172-20-48-92.ap-northeast-2.compute.internal\" \nE0522 07:24:22.387026       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9690/default: secrets \"default-token-cpkpm\" is forbidden: unable to create new content in namespace provisioning-9690 because it is being terminated\nI0522 07:24:23.351062       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 3\"\nI0522 07:24:23.363771       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-58477d78f9\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:24:23.382055       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-58477d78f9\" need=2 creating=2\nI0522 07:24:23.382943       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" 
type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-58477d78f9 to 2\"\nI0522 07:24:23.394236       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:24:23.394846       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-58477d78f9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-58477d78f9-csjms\"\nI0522 07:24:23.407956       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-58477d78f9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-58477d78f9-vrtmt\"\nI0522 07:24:23.411357       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-99f7796d5 to 5\"\nI0522 07:24:23.412520       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-6893/webserver-99f7796d5\" need=5 deleting=1\nI0522 07:24:23.412566       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-6893/webserver-99f7796d5\" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]\nI0522 07:24:23.412734       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-99f7796d5\" pod=\"deployment-6893/webserver-99f7796d5-rflg4\"\nI0522 07:24:23.450298       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6893/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-58477d78f9\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0522 07:24:23.474229       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-99f7796d5-rflg4\" objectUID=5a90c5aa-d02f-46f9-8509-f1869ba49d8c kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:23.477575       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-99f7796d5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-99f7796d5-rflg4\"\nI0522 07:24:23.490647       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-58477d78f9 to 3\"\nI0522 07:24:23.508834       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-99f7796d5-rflg4\" objectUID=5a90c5aa-d02f-46f9-8509-f1869ba49d8c kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:23.546168       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6893/webserver-58477d78f9\" need=3 creating=1\nI0522 07:24:23.556509       1 event.go:291] \"Event occurred\" object=\"deployment-6893/webserver-58477d78f9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-58477d78f9-wkqbz\"\nE0522 07:24:23.651712       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods\nI0522 07:24:23.790800       1 
namespace_controller.go:185] Namespace has been deleted svc-latency-8527
I0522 07:24:23.843397       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-2562/pod-dcc10c5c-cabf-4de1-943a-2ee30d851875" PVC="persistent-local-volumes-test-2562/pvc-n67pq"
I0522 07:24:23.843547       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-2562/pvc-n67pq"
I0522 07:24:24.042279       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-2562/pod-dcc10c5c-cabf-4de1-943a-2ee30d851875" PVC="persistent-local-volumes-test-2562/pvc-n67pq"
I0522 07:24:24.042304       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-2562/pvc-n67pq"
I0522 07:24:24.044828       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-2562/pvc-n67pq"
I0522 07:24:24.051358       1 pv_controller.go:640] volume "local-pvbwpk4" is released and reclaim policy "Retain" will be executed
I0522 07:24:24.054100       1 pv_controller.go:879] volume "local-pvbwpk4" entered phase "Released"
I0522 07:24:24.060230       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-2562/pvc-n67pq" was already processed
I0522 07:24:24.161641       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-northeast-2a/vol-0634563407ca5ce66
I0522 07:24:24.211597       1 pv_controller.go:1677] volume "pvc-46b425af-73d6-4c73-91fb-97969550dc66" provisioned for claim "volume-expand-6779/awsrmn7t"
I0522 07:24:24.211830       1 event.go:291] "Event occurred" object="volume-expand-6779/awsrmn7t" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-46b425af-73d6-4c73-91fb-97969550dc66 using kubernetes.io/aws-ebs"
I0522 07:24:24.214716       1 pv_controller.go:879] volume "pvc-46b425af-73d6-4c73-91fb-97969550dc66" entered phase "Bound"
I0522 07:24:24.214891       1 pv_controller.go:982] volume "pvc-46b425af-73d6-4c73-91fb-97969550dc66" bound to claim "volume-expand-6779/awsrmn7t"
I0522 07:24:24.227518       1 pv_controller.go:823] claim "volume-expand-6779/awsrmn7t" entered phase "Bound"
E0522 07:24:24.461529       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods
I0522 07:24:24.906450       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-46b425af-73d6-4c73-91fb-97969550dc66" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0634563407ca5ce66") from node "ip-172-20-35-65.ap-northeast-2.compute.internal"
I0522 07:24:24.952068       1 aws.go:2014] Assigned mount device cg -> volume vol-0634563407ca5ce66
I0522 07:24:25.289260       1 aws.go:2427] AttachVolume volume="vol-0634563407ca5ce66" instance="i-0b9275cce37678aeb" request returned {
  AttachTime: 2021-05-22 07:24:25.278 +0000 UTC,
  Device: "/dev/xvdcg",
  InstanceId: "i-0b9275cce37678aeb",
  State: "attaching",
  VolumeId: "vol-0634563407ca5ce66"
}
E0522 07:24:25.334940       1 pv_controller.go:1452] error finding provisioning plugin for claim ephemeral-5703/inline-volume-hztfd-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I0522 07:24:25.335451       1 event.go:291] "Event occurred" object="ephemeral-5703/inline-volume-hztfd-my-volume" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I0522 07:24:25.364041       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6893/webserver-99f7796d5" need=4 deleting=1
I0522 07:24:25.364073       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6893/webserver-99f7796d5" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]
I0522 07:24:25.364381       1 controller_utils.go:602] "Deleting pod" controller="webserver-99f7796d5" pod="deployment-6893/webserver-99f7796d5-8qr6s"
I0522 07:24:25.370427       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-99f7796d5 to 4"
I0522 07:24:25.391198       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6893/webserver-58477d78f9" need=4 creating=1
I0522 07:24:25.394138       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-58477d78f9 to 4"
I0522 07:24:25.397437       1 event.go:291] "Event occurred" object="deployment-6893/webserver-99f7796d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-99f7796d5-8qr6s"
I0522 07:24:25.413321       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-99f7796d5-8qr6s" objectUID=9f7126c3-6c6e-4630-a2b2-fd75e7e621b1 kind="CiliumEndpoint" virtual=false
I0522 07:24:25.415086       1 event.go:291] "Event occurred" object="deployment-6893/webserver-58477d78f9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-58477d78f9-th875"
I0522 07:24:25.435552       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-99f7796d5-8qr6s" objectUID=9f7126c3-6c6e-4630-a2b2-fd75e7e621b1 kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:24:25.471957       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6893/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0522 07:24:25.491404       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6893/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0522 07:24:25.629191       1 garbagecollector.go:471] "Processing object" object="container-probe-1923/busybox-c90fe1bd-228b-4011-8844-d51812977ce9" objectUID=780a39d8-dd3f-4bb4-80e6-aa17613afb37 kind="CiliumEndpoint" virtual=false
I0522 07:24:25.632466       1 garbagecollector.go:580] "Deleting object" object="container-probe-1923/busybox-c90fe1bd-228b-4011-8844-d51812977ce9" objectUID=780a39d8-dd3f-4bb4-80e6-aa17613afb37 kind="CiliumEndpoint" propagationPolicy=Background
E0522 07:24:25.687276       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 07:24:25.791450       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-5703, name: inline-volume-hztfd, uid: fc0c5c37-ecab-44ef-820a-f4cd3b10d961] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0522 07:24:25.791683       1 garbagecollector.go:471] "Processing object" object="ephemeral-5703/inline-volume-hztfd-my-volume" objectUID=fd207bcb-ee9d-49ce-8f28-8d4b9c165ca1 kind="PersistentVolumeClaim" virtual=false
I0522 07:24:25.791808       1 garbagecollector.go:471] "Processing object" object="ephemeral-5703/inline-volume-hztfd" objectUID=fc0c5c37-ecab-44ef-820a-f4cd3b10d961 kind="Pod" virtual=false
I0522 07:24:25.797483       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-5703, name: inline-volume-hztfd-my-volume, uid: fd207bcb-ee9d-49ce-8f28-8d4b9c165ca1] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-5703, name: inline-volume-hztfd, uid: fc0c5c37-ecab-44ef-820a-f4cd3b10d961] is deletingDependents
I0522 07:24:25.798689       1 garbagecollector.go:580] "Deleting object" object="ephemeral-5703/inline-volume-hztfd-my-volume" objectUID=fd207bcb-ee9d-49ce-8f28-8d4b9c165ca1 kind="PersistentVolumeClaim" propagationPolicy=Background
I0522 07:24:25.804668       1 garbagecollector.go:471] "Processing object" object="ephemeral-5703/inline-volume-hztfd-my-volume" objectUID=fd207bcb-ee9d-49ce-8f28-8d4b9c165ca1 kind="PersistentVolumeClaim" virtual=false
E0522 07:24:25.804995       1 pv_controller.go:1452] error finding provisioning plugin for claim ephemeral-5703/inline-volume-hztfd-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I0522 07:24:25.805213       1 event.go:291] "Event occurred" object="ephemeral-5703/inline-volume-hztfd-my-volume" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I0522 07:24:25.811765       1 pvc_protection_controller.go:291] "PVC is unused" PVC="ephemeral-5703/inline-volume-hztfd-my-volume"
I0522 07:24:25.819628       1 garbagecollector.go:580] "Deleting object" object="ephemeral-5703/inline-volume-hztfd-my-volume" objectUID=fd207bcb-ee9d-49ce-8f28-8d4b9c165ca1 kind="PersistentVolumeClaim" propagationPolicy=Background
I0522 07:24:25.824954       1 garbagecollector.go:471] "Processing object" object="ephemeral-5703/inline-volume-hztfd" objectUID=fc0c5c37-ecab-44ef-820a-f4cd3b10d961 kind="Pod" virtual=false
E0522 07:24:25.829230       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"PersistentVolumeClaim", Name:"inline-volume-hztfd-my-volume", UID:"fd207bcb-ee9d-49ce-8f28-8d4b9c165ca1", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"ephemeral-5703"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"inline-volume-hztfd", UID:"fc0c5c37-ecab-44ef-820a-f4cd3b10d961", Controller:(*bool)(0xc002aeb34a), BlockOwnerDeletion:(*bool)(0xc002aeb34b)}}}: persistentvolumeclaims "inline-volume-hztfd-my-volume" not found
I0522 07:24:25.831826       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-5703, name: inline-volume-hztfd, uid: fc0c5c37-ecab-44ef-820a-f4cd3b10d961]
I0522 07:24:25.834493       1 garbagecollector.go:471] "Processing object" object="ephemeral-5703/inline-volume-hztfd-my-volume" objectUID=fd207bcb-ee9d-49ce-8f28-8d4b9c165ca1 kind="PersistentVolumeClaim" virtual=false
I0522 07:24:25.930884       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0b4df4055e38a01c1") on node "ip-172-20-49-129.ap-northeast-2.compute.internal"
I0522 07:24:25.942486       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0b4df4055e38a01c1") on node "ip-172-20-49-129.ap-northeast-2.compute.internal"
E0522 07:24:26.003022       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods
I0522 07:24:26.630123       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6893/webserver-99f7796d5" need=3 deleting=1
I0522 07:24:26.630247       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6893/webserver-99f7796d5" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]
I0522 07:24:26.630419       1 controller_utils.go:602] "Deleting pod" controller="webserver-99f7796d5" pod="deployment-6893/webserver-99f7796d5-xbwh7"
I0522 07:24:26.631227       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-99f7796d5 to 3"
I0522 07:24:26.645695       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6893/webserver-58477d78f9" need=5 creating=1
I0522 07:24:26.646326       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-58477d78f9 to 5"
I0522 07:24:26.650063       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-99f7796d5-xbwh7" objectUID=52d22c79-4d53-440b-b19d-c986c7e5d4e0 kind="CiliumEndpoint" virtual=false
I0522 07:24:26.654138       1 event.go:291] "Event occurred" object="deployment-6893/webserver-99f7796d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-99f7796d5-xbwh7"
I0522 07:24:26.659984       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-99f7796d5-xbwh7" objectUID=52d22c79-4d53-440b-b19d-c986c7e5d4e0 kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:24:26.666526       1 event.go:291] "Event occurred" object="deployment-6893/webserver-58477d78f9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-58477d78f9-m7s5r"
E0522 07:24:26.936145       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 07:24:27.014120       1 event.go:291] "Event occurred" object="volume-9870/awsgcvgd" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0522 07:24:27.421073       1 aws.go:2037] Releasing in-process attachment entry: cg -> volume vol-0634563407ca5ce66
I0522 07:24:27.423782       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-46b425af-73d6-4c73-91fb-97969550dc66" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0634563407ca5ce66") from node "ip-172-20-35-65.ap-northeast-2.compute.internal"
I0522 07:24:27.423879       1 event.go:291] "Event occurred" object="volume-expand-6779/pod-97fbe5c2-8d3a-4357-9991-487c7043b417" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-46b425af-73d6-4c73-91fb-97969550dc66\" "
E0522 07:24:27.491329       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-7646/default: secrets "default-token-wkrlv" is forbidden: unable to create new content in namespace ephemeral-7646 because it is being terminated
I0522 07:24:27.671738       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-99f7796d5 to 2"
I0522 07:24:27.671934       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6893/webserver-99f7796d5" need=2 deleting=1
I0522 07:24:27.671972       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6893/webserver-99f7796d5" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]
I0522 07:24:27.672066       1 controller_utils.go:602] "Deleting pod" controller="webserver-99f7796d5" pod="deployment-6893/webserver-99f7796d5-6rk4l"
I0522 07:24:27.679779       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-6893/webserver-58477d78f9" need=6 creating=1
I0522 07:24:27.680507       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-58477d78f9 to 6"
I0522 07:24:27.689649       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-99f7796d5-6rk4l" objectUID=a87e2caa-afab-4149-92bf-871fbb172b9d kind="CiliumEndpoint" virtual=false
I0522 07:24:27.690606       1 event.go:291] "Event occurred" object="deployment-6893/webserver-99f7796d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-99f7796d5-6rk4l"
I0522 07:24:27.690629       1 event.go:291] "Event occurred" object="deployment-6893/webserver-58477d78f9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-58477d78f9-k74m4"
I0522 07:24:27.707775       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-99f7796d5-6rk4l" objectUID=a87e2caa-afab-4149-92bf-871fbb172b9d kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:24:27.817381       1 event.go:291] "Event occurred" object="provisioning-143/nfsthlbg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"example.com/nfs-provisioning-143\" or manually created by system administrator"
I0522 07:24:27.847404       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6893/webserver-99f7796d5" need=1 deleting=1
I0522 07:24:27.847436       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6893/webserver-99f7796d5" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]
I0522 07:24:27.847568       1 controller_utils.go:602] "Deleting pod" controller="webserver-99f7796d5" pod="deployment-6893/webserver-99f7796d5-clfxj"
I0522 07:24:27.848345       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-99f7796d5 to 1"
I0522 07:24:27.863850       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-99f7796d5-clfxj" objectUID=271690f3-2e29-44b4-90a2-c9568771b5c2 kind="CiliumEndpoint" virtual=false
I0522 07:24:27.866225       1 event.go:291] "Event occurred" object="deployment-6893/webserver-99f7796d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-99f7796d5-clfxj"
I0522 07:24:27.870124       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-99f7796d5-clfxj" objectUID=271690f3-2e29-44b4-90a2-c9568771b5c2 kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:24:28.618752       1 namespace_controller.go:185] Namespace has been deleted provisioning-9690
E0522 07:24:28.690824       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods
I0522 07:24:28.853595       1 event.go:291] "Event occurred" object="deployment-6893/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-99f7796d5 to 0"
I0522 07:24:28.854020       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-6893/webserver-99f7796d5" need=0 deleting=1
I0522 07:24:28.854236       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-6893/webserver-99f7796d5" relatedReplicaSets=[webserver-847dcfb7fb webserver-99f7796d5 webserver-58477d78f9]
I0522 07:24:28.854409       1 controller_utils.go:602] "Deleting pod" controller="webserver-99f7796d5" pod="deployment-6893/webserver-99f7796d5-zzkqp"
I0522 07:24:28.862498       1 event.go:291] "Event occurred" object="deployment-6893/webserver-99f7796d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-99f7796d5-zzkqp"
I0522 07:24:28.863215       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-99f7796d5-zzkqp" objectUID=41e6274e-8f11-4eb5-9b1f-b5bf49d77716 kind="CiliumEndpoint" virtual=false
I0522 07:24:28.871847       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-99f7796d5-zzkqp" objectUID=41e6274e-8f11-4eb5-9b1f-b5bf49d77716 kind="CiliumEndpoint" propagationPolicy=Background
E0522 07:24:29.407774       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0522 07:24:29.627363       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 07:24:30.425287       1 pv_controller.go:879] volume "pvc-aa615edc-9ebf-431c-82ea-e4a9f216824f" entered phase "Bound"
I0522 07:24:30.425311       1 pv_controller.go:982] volume "pvc-aa615edc-9ebf-431c-82ea-e4a9f216824f" bound to claim "provisioning-143/nfsthlbg"
I0522 07:24:30.433026       1 pv_controller.go:823] claim "provisioning-143/nfsthlbg" entered phase "Bound"
I0522 07:24:30.494358       1 event.go:291] "Event occurred" object="ephemeral-5703-6888/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
I0522 07:24:30.784343       1 event.go:291] "Event occurred" object="volumemode-7718-679/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
I0522 07:24:30.992247       1 event.go:291] "Event occurred" object="ephemeral-5703-6888/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0522 07:24:31.000662       1 deployment_controller.go:583] "Deployment has been deleted" deployment="deployment-2975/test-cleanup-deployment"
E0522 07:24:31.133973       1 tokens_controller.go:262] error synchronizing serviceaccount container-probe-1923/default: secrets "default-token-qwgvs" is forbidden: unable to create new content in namespace container-probe-1923 because it is being terminated
I0522 07:24:31.290742       1 event.go:291] "Event occurred" object="volumemode-7718-679/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0522 07:24:31.300181       1 namespace_controller.go:185] Namespace has been deleted subpath-8026
I0522 07:24:31.316039       1 event.go:291] "Event occurred" object="ephemeral-5703-6888/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
I0522 07:24:31.331162       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0b4df4055e38a01c1") on node "ip-172-20-49-129.ap-northeast-2.compute.internal"
I0522 07:24:31.384443       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0b4df4055e38a01c1") from node "ip-172-20-35-65.ap-northeast-2.compute.internal"
I0522 07:24:31.434835       1 aws.go:2014] Assigned mount device ck -> volume vol-0b4df4055e38a01c1
I0522 07:24:31.624904       1 event.go:291] "Event occurred" object="volumemode-7718-679/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
I0522 07:24:31.650520       1 event.go:291] "Event occurred" object="ephemeral-5703-6888/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
I0522 07:24:31.809037       1 aws.go:2427] AttachVolume volume="vol-0b4df4055e38a01c1" instance="i-0b9275cce37678aeb" request returned {
  AttachTime: 2021-05-22 07:24:31.796 +0000 UTC,
  Device: "/dev/xvdck",
  InstanceId: "i-0b9275cce37678aeb",
  State: "attaching",
  VolumeId: "vol-0b4df4055e38a01c1"
}
I0522 07:24:31.907293       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8183-2424
I0522 07:24:31.957829       1 event.go:291] "Event occurred" object="volumemode-7718-679/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
I0522 07:24:31.965553       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpath-attacher-nr7pc" objectUID=4fb2ef03-e1c0-49df-ab9d-0f00bd974407 kind="EndpointSlice" virtual=false
I0522 07:24:31.969802       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpath-attacher-nr7pc" objectUID=4fb2ef03-e1c0-49df-ab9d-0f00bd974407 kind="EndpointSlice" propagationPolicy=Background
I0522 07:24:31.980439       1 event.go:291] "Event occurred" object="ephemeral-5703-6888/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
I0522 07:24:32.027969       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-2562
I0522 07:24:32.141820       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpath-attacher-86cb588f84" objectUID=41e4cba7-0be4-476b-a281-6d52d79e5a8f kind="ControllerRevision" virtual=false
I0522 07:24:32.142244       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-7646-3433/csi-hostpath-attacher
I0522 07:24:32.142386       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpath-attacher-0" objectUID=27828901-a277-4121-ba60-4ccafc85af68 kind="Pod" virtual=false
I0522 07:24:32.143806       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpath-attacher-86cb588f84" objectUID=41e4cba7-0be4-476b-a281-6d52d79e5a8f kind="ControllerRevision" propagationPolicy=Background
I0522 07:24:32.144610       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpath-attacher-0" objectUID=27828901-a277-4121-ba60-4ccafc85af68 kind="Pod" propagationPolicy=Background
I0522 07:24:32.293070       1 event.go:291] "Event occurred" object="volumemode-7718-679/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
I0522 07:24:32.430246       1 event.go:291] "Event occurred" object="ephemeral-5703/inline-volume-tester-7cr5w-my-volume-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-ephemeral-5703\" or manually created by system administrator"
I0522 07:24:32.430423       1 event.go:291] "Event occurred" object="ephemeral-5703/inline-volume-tester-7cr5w-my-volume-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-ephemeral-5703\" or manually created by system administrator"
I0522 07:24:32.467616       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpathplugin-q267w" objectUID=b3d1c906-0ec3-40f4-8280-2f82dc9e8994 kind="EndpointSlice" virtual=false
I0522 07:24:32.471227       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpathplugin-q267w" objectUID=b3d1c906-0ec3-40f4-8280-2f82dc9e8994 kind="EndpointSlice" propagationPolicy=Background
I0522 07:24:32.636881       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpathplugin-6f4948d468" objectUID=41324ccd-a503-4e88-a2ed-2ce3a7fd4bdc kind="ControllerRevision" virtual=false
I0522 07:24:32.637092       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-7646-3433/csi-hostpathplugin
I0522 07:24:32.637144       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpathplugin-0" objectUID=80af6a42-6053-4404-9314-d569de3eae48 kind="Pod" virtual=false
I0522 07:24:32.638870       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpathplugin-6f4948d468" objectUID=41324ccd-a503-4e88-a2ed-2ce3a7fd4bdc kind="ControllerRevision" propagationPolicy=Background
I0522 07:24:32.639142       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpathplugin-0" objectUID=80af6a42-6053-4404-9314-d569de3eae48 kind="Pod" propagationPolicy=Background
I0522 07:24:32.682000       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-2621, name: inline-volume-tester-zrt6x, uid: 2e4ac0d0-dfe0-4aaa-867a-40624aa6e97f] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0522 07:24:32.682647       1 garbagecollector.go:471] "Processing object" object="ephemeral-2621/inline-volume-tester-zrt6x-my-volume-0" objectUID=e0a81a92-2401-4d39-9540-848ff30af562 kind="PersistentVolumeClaim" virtual=false
I0522 07:24:32.682900       1 garbagecollector.go:471] "Processing object" object="ephemeral-2621/inline-volume-tester-zrt6x" objectUID=1eee0d0b-ef51-4a3b-ad77-732725c8d450 kind="CiliumEndpoint" virtual=false
I0522 07:24:32.682933       1 garbagecollector.go:471] "Processing object" object="ephemeral-2621/inline-volume-tester-zrt6x" objectUID=2e4ac0d0-dfe0-4aaa-867a-40624aa6e97f kind="Pod" virtual=false
I0522 07:24:32.686075       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-2621, name: inline-volume-tester-zrt6x-my-volume-0, uid: e0a81a92-2401-4d39-9540-848ff30af562] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-2621, name: inline-volume-tester-zrt6x, uid: 2e4ac0d0-dfe0-4aaa-867a-40624aa6e97f] is deletingDependents
I0522 07:24:32.686112       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: ephemeral-2621, name: inline-volume-tester-zrt6x, uid: 1eee0d0b-ef51-4a3b-ad77-732725c8d450] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-2621, name: inline-volume-tester-zrt6x, uid: 2e4ac0d0-dfe0-4aaa-867a-40624aa6e97f] is deletingDependents
I0522 07:24:32.688105       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2621/inline-volume-tester-zrt6x-my-volume-0" objectUID=e0a81a92-2401-4d39-9540-848ff30af562 kind="PersistentVolumeClaim" propagationPolicy=Background
I0522 07:24:32.689031       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2621/inline-volume-tester-zrt6x" objectUID=1eee0d0b-ef51-4a3b-ad77-732725c8d450 kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:24:32.690543       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-northeast-2a/vol-00f69c3b4351d31e7
I0522 07:24:32.692581       1 garbagecollector.go:471] "Processing object" object="ephemeral-2621/inline-volume-tester-zrt6x-my-volume-0" objectUID=e0a81a92-2401-4d39-9540-848ff30af562 kind="PersistentVolumeClaim" virtual=false
I0522 07:24:32.693082       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="ephemeral-2621/inline-volume-tester-zrt6x" PVC="ephemeral-2621/inline-volume-tester-zrt6x-my-volume-0"
I0522 07:24:32.693099       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="ephemeral-2621/inline-volume-tester-zrt6x-my-volume-0"
I0522 07:24:32.697572       1 garbagecollector.go:471] "Processing object" object="ephemeral-2621/inline-volume-tester-zrt6x" objectUID=2e4ac0d0-dfe0-4aaa-867a-40624aa6e97f kind="Pod" virtual=false
I0522 07:24:32.699470       1 garbagecollector.go:471] "Processing object" object="ephemeral-2621/inline-volume-tester-zrt6x" objectUID=1eee0d0b-ef51-4a3b-ad77-732725c8d450 kind="CiliumEndpoint" virtual=false
I0522 07:24:32.720165       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-2621, name: inline-volume-tester-zrt6x-my-volume-0, uid: e0a81a92-2401-4d39-9540-848ff30af562] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-2621, name: inline-volume-tester-zrt6x, uid: 2e4ac0d0-dfe0-4aaa-867a-40624aa6e97f] is deletingDependents
I0522 07:24:32.720212       1 garbagecollector.go:471] "Processing object" object="ephemeral-2621/inline-volume-tester-zrt6x-my-volume-0" objectUID=e0a81a92-2401-4d39-9540-848ff30af562 kind="PersistentVolumeClaim" virtual=false
I0522 07:24:32.736501       1 namespace_controller.go:185] Namespace has been deleted ephemeral-7646
I0522 07:24:32.741767       1 pv_controller.go:1677] volume "pvc-3f52de0b-72db-469b-8679-dffc0f012378" provisioned for claim "volume-9870/awsgcvgd"
I0522 07:24:32.742057       1 event.go:291] "Event occurred" object="volume-9870/awsgcvgd" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-3f52de0b-72db-469b-8679-dffc0f012378 using kubernetes.io/aws-ebs"
I0522 07:24:32.747266       1 pv_controller.go:879] volume "pvc-3f52de0b-72db-469b-8679-dffc0f012378" entered phase "Bound"
I0522 07:24:32.747297       1 pv_controller.go:982] volume "pvc-3f52de0b-72db-469b-8679-dffc0f012378" bound to claim "volume-9870/awsgcvgd"
I0522 07:24:32.754802       1 pv_controller.go:823] claim "volume-9870/awsgcvgd" entered phase "Bound"
I0522 07:24:32.773677       1 event.go:291] "Event occurred" object="volumemode-7718/csi-hostpath8fx7m" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-volumemode-7718\" or manually created by system administrator"
I0522 07:24:32.773971       1 event.go:291] "Event occurred" object="volumemode-7718/csi-hostpath8fx7m" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-volumemode-7718\" or manually created by system administrator"
I0522 07:24:32.794311       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpath-provisioner-s5dlf" objectUID=951a5d7b-0e15-4eba-a482-bb500f1aad3e kind="EndpointSlice" virtual=false
I0522 07:24:32.801648       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpath-provisioner-s5dlf" objectUID=951a5d7b-0e15-4eba-a482-bb500f1aad3e kind="EndpointSlice" propagationPolicy=Background
I0522 07:24:32.966903       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpath-provisioner-5f5764b76" objectUID=056d2765-069c-4241-b2c1-764b4c14d106 kind="ControllerRevision" virtual=false
I0522 07:24:32.967339       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-7646-3433/csi-hostpath-provisioner
I0522 07:24:32.967520       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpath-provisioner-0" objectUID=275fdb9c-166b-44c3-9ba9-379d7d92c170 kind="Pod" virtual=false
I0522 07:24:32.970306       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpath-provisioner-5f5764b76" objectUID=056d2765-069c-4241-b2c1-764b4c14d106 kind="ControllerRevision" propagationPolicy=Background
I0522 07:24:32.970596       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpath-provisioner-0" objectUID=275fdb9c-166b-44c3-9ba9-379d7d92c170 kind="Pod" propagationPolicy=Background
I0522 07:24:33.141765       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpath-resizer-mfr72" objectUID=e2c8175f-8800-4385-acaa-315ecb4558fa kind="EndpointSlice" virtual=false
I0522 07:24:33.150356       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpath-resizer-mfr72" objectUID=e2c8175f-8800-4385-acaa-315ecb4558fa kind="EndpointSlice" propagationPolicy=Background
I0522 07:24:33.202679       1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-4067
I0522 07:24:33.320941       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpath-resizer-9b58686c" objectUID=476dc47c-83ee-4954-bdf8-ae64d9bb3d55 kind="ControllerRevision" virtual=false
I0522 07:24:33.321176       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-7646-3433/csi-hostpath-resizer
I0522 07:24:33.321220       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpath-resizer-0" objectUID=87f47592-b639-4345-945c-f39a12e2e7ce kind="Pod" virtual=false
I0522 07:24:33.330082       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpath-resizer-0" objectUID=87f47592-b639-4345-945c-f39a12e2e7ce kind="Pod" propagationPolicy=Background
I0522 07:24:33.330305       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpath-resizer-9b58686c" objectUID=476dc47c-83ee-4954-bdf8-ae64d9bb3d55 kind="ControllerRevision" propagationPolicy=Background
E0522 07:24:33.355536       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 07:24:33.394272       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-3f52de0b-72db-469b-8679-dffc0f012378" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-00f69c3b4351d31e7") from node "ip-172-20-63-92.ap-northeast-2.compute.internal"
I0522 07:24:33.448974       1 aws.go:2014] Assigned mount device bc -> volume vol-00f69c3b4351d31e7
I0522 07:24:33.483791       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpath-snapshotter-p4nnc" objectUID=6e45a890-4451-432c-bf93-08529d684020 kind="EndpointSlice" virtual=false
I0522 07:24:33.486546       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpath-snapshotter-p4nnc" objectUID=6e45a890-4451-432c-bf93-08529d684020 kind="EndpointSlice" propagationPolicy=Background
I0522 07:24:33.647259       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpath-snapshotter-6744c766cf" objectUID=a1b2d28d-74a4-46eb-ba21-19cb31c6f359 kind="ControllerRevision" virtual=false
I0522 07:24:33.647507       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-7646-3433/csi-hostpath-snapshotter
I0522 07:24:33.647580       1 garbagecollector.go:471] "Processing object" object="ephemeral-7646-3433/csi-hostpath-snapshotter-0" objectUID=fdc50559-d527-4c89-b987-86fb67d140c0 kind="Pod" virtual=false
I0522 07:24:33.649176       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpath-snapshotter-6744c766cf" objectUID=a1b2d28d-74a4-46eb-ba21-19cb31c6f359 kind="ControllerRevision" propagationPolicy=Background
I0522 07:24:33.649516       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7646-3433/csi-hostpath-snapshotter-0" objectUID=fdc50559-d527-4c89-b987-86fb67d140c0 kind="Pod" propagationPolicy=Background
I0522 07:24:33.697141       1 garbagecollector.go:471] "Processing object" object="kubectl-4367/httpd" objectUID=0c2380e0-405c-4430-b2e5-39f590fb1ca7 kind="CiliumEndpoint" virtual=false
I0522 07:24:33.699438       1 garbagecollector.go:580] "Deleting object" object="kubectl-4367/httpd" objectUID=0c2380e0-405c-4430-b2e5-39f590fb1ca7 kind="CiliumEndpoint" propagationPolicy=Background
I0522 07:24:33.820876       1 aws.go:2427] AttachVolume volume="vol-00f69c3b4351d31e7" instance="i-013a629b5b2f1831d" request returned {
  AttachTime: 2021-05-22 07:24:33.807 +0000 UTC,
  Device: "/dev/xvdbc",
  InstanceId: "i-013a629b5b2f1831d",
  State: "attaching",
  VolumeId: "vol-00f69c3b4351d31e7"
}
I0522 07:24:33.930048       1 aws.go:2037] Releasing in-process attachment entry: ck -> volume vol-0b4df4055e38a01c1
I0522 07:24:33.930102       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0b4df4055e38a01c1") from node "ip-172-20-35-65.ap-northeast-2.compute.internal"
I0522 07:24:33.930402       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-2795/pod-370d543e-451c-403f-8097-9a69e4fa91a1" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-cd009246-5f2e-4a5c-8d5e-6716fb3146aa\" "
E0522 07:24:33.973811       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods
I0522 07:24:34.078153       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-hqnpz" objectUID=80f7222f-20fb-42dc-9fb1-5726def693d3 kind="Pod" virtual=false
I0522 07:24:34.078349       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-xvsqz" objectUID=65e121d1-4483-4f0e-856b-ce296598f65d kind="Pod" virtual=false
I0522 07:24:34.078570       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-w8r4f" objectUID=773d84cb-06a1-497e-9a92-04797b4f2398 kind="Pod" virtual=false
I0522 07:24:34.078681       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-pwlp2" objectUID=1bc9591a-3793-496e-959d-8d9ae42e40c4 kind="Pod" virtual=false
I0522 07:24:34.078790       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-zbd9x" objectUID=1aab121a-49c3-4edc-97e1-4716649ecd95 kind="Pod" virtual=false
I0522 07:24:34.078893       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-n6lq6" objectUID=31e48ef9-2108-4ca8-8183-9d99ca6617b9 kind="Pod" virtual=false
I0522 07:24:34.078993       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-vvdhq" objectUID=4f2d9a44-e722-421b-99b7-fc9559d9b70a kind="Pod" virtual=false
I0522 07:24:34.079082       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-gmfs8" objectUID=5b060edb-8c75-491f-ae86-9218deafe83c kind="Pod" virtual=false
I0522 07:24:34.079175       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-7ccb4" objectUID=e989d6e7-4209-442e-bb22-cd1fde7a0060 kind="Pod" virtual=false
I0522 07:24:34.079285       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-8qjmc" objectUID=e2cebe44-3bd7-4c24-892c-3bce68a3a442 kind="Pod" virtual=false
I0522 07:24:34.079383       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-8h5fp" objectUID=3dc9cd27-270d-4e25-b54d-52a29dab01b2 kind="Pod" virtual=false
I0522 07:24:34.079481       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-52sc9" objectUID=5fa550b8-b348-4692-b5fe-7362de9c5b23 kind="Pod" virtual=false
I0522 07:24:34.079581       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-b4nl5" objectUID=223da30e-a797-413a-845e-d81963a4af00 kind="Pod" virtual=false
I0522 07:24:34.079681       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-8fvc8" objectUID=4708e8cd-292e-40e6-8df1-915a735fae43 kind="Pod" virtual=false
I0522 07:24:34.079775       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-x8rmt" objectUID=35215e78-e3a9-4df1-a1cc-818e1da20e39 kind="Pod" virtual=false
I0522 07:24:34.079872       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-6pdz2" objectUID=9fcd2507-383a-43fe-917d-94a2c4d519f3 kind="Pod" virtual=false
I0522 07:24:34.079100       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-xkxwk" objectUID=6ffbf51d-8ccc-4066-9046-96d4c7fb8d2e kind="Pod" virtual=false
I0522 07:24:34.079126       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-smktr" objectUID=1a726688-2b7f-46f8-b5af-1ac0bfcc988b kind="Pod" virtual=false
I0522 07:24:34.079140       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-ztfmc" objectUID=4bf9daa9-95e4-4806-9a81-3ce45815e7bd kind="Pod" virtual=false
I0522 07:24:34.079162       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-kr6fh" objectUID=d2feeef8-9ea2-45aa-893f-30511b044363 kind="Pod" virtual=false
I0522 07:24:34.089997       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-kr6fh" objectUID=d2feeef8-9ea2-45aa-893f-30511b044363 kind="Pod" propagationPolicy=Background
I0522 07:24:34.090514       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-b4nl5" objectUID=223da30e-a797-413a-845e-d81963a4af00 kind="Pod" propagationPolicy=Background
I0522 07:24:34.090588       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-52sc9" objectUID=5fa550b8-b348-4692-b5fe-7362de9c5b23 kind="Pod" propagationPolicy=Background
I0522 07:24:34.093459       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-smktr" objectUID=1a726688-2b7f-46f8-b5af-1ac0bfcc988b kind="Pod" propagationPolicy=Background
I0522 07:24:34.093779       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-n6lq6" objectUID=31e48ef9-2108-4ca8-8183-9d99ca6617b9 kind="Pod" propagationPolicy=Background
I0522 07:24:34.093968       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-pwlp2" objectUID=1bc9591a-3793-496e-959d-8d9ae42e40c4 kind="Pod" propagationPolicy=Background
I0522 07:24:34.090673       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-hqnpz" objectUID=80f7222f-20fb-42dc-9fb1-5726def693d3 kind="Pod" propagationPolicy=Background
I0522 07:24:34.090705       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-w8r4f" objectUID=773d84cb-06a1-497e-9a92-04797b4f2398 kind="Pod" propagationPolicy=Background
I0522 07:24:34.090742       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-xvsqz" objectUID=65e121d1-4483-4f0e-856b-ce296598f65d kind="Pod" propagationPolicy=Background
I0522 07:24:34.090779       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-vvdhq" objectUID=4f2d9a44-e722-421b-99b7-fc9559d9b70a kind="Pod" propagationPolicy=Background
I0522 07:24:34.090818       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-ztfmc" objectUID=4bf9daa9-95e4-4806-9a81-3ce45815e7bd kind="Pod" propagationPolicy=Background
I0522 07:24:34.094011       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-8fvc8" objectUID=4708e8cd-292e-40e6-8df1-915a735fae43 kind="Pod" propagationPolicy=Background
I0522 07:24:34.094060       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-8h5fp" objectUID=3dc9cd27-270d-4e25-b54d-52a29dab01b2 kind="Pod" propagationPolicy=Background
I0522 07:24:34.094096       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-gmfs8" objectUID=5b060edb-8c75-491f-ae86-9218deafe83c kind="Pod" propagationPolicy=Background
I0522 07:24:34.094139       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-zbd9x" objectUID=1aab121a-49c3-4edc-97e1-4716649ecd95 kind="Pod" propagationPolicy=Background
I0522 07:24:34.094176       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-6pdz2" objectUID=9fcd2507-383a-43fe-917d-94a2c4d519f3 kind="Pod" propagationPolicy=Background
I0522 07:24:34.094215       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-x8rmt" objectUID=35215e78-e3a9-4df1-a1cc-818e1da20e39 kind="Pod" propagationPolicy=Background
I0522 07:24:34.094247       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-xkxwk" objectUID=6ffbf51d-8ccc-4066-9046-96d4c7fb8d2e kind="Pod" propagationPolicy=Background
I0522 07:24:34.094276       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-8qjmc" objectUID=e2cebe44-3bd7-4c24-892c-3bce68a3a442 kind="Pod" propagationPolicy=Background
I0522 07:24:34.090634       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-7ccb4" objectUID=e989d6e7-4209-442e-bb22-cd1fde7a0060 kind="Pod" propagationPolicy=Background
I0522 07:24:34.098359       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-gwklz" objectUID=ad141475-332e-4497-9c37-f106a8e6e03e kind="Pod" virtual=false
I0522 07:24:34.101374       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-ptx8l" objectUID=41312fc0-ae95-4b45-a80f-c51607936f7f kind="Pod" virtual=false
I0522 07:24:34.118621       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-kxd7r" objectUID=9b245e14-17a2-447c-9f50-1833cdb0ac37 kind="Pod" virtual=false
I0522 07:24:34.120860       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-9fwpg" objectUID=7bc9d0a0-e0ce-4ba8-b258-3eab945fa328 kind="Pod" virtual=false
I0522 07:24:34.131968       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-lfwx9" objectUID=8440f6a1-7cb6-4c61-8dfa-b885b4d280b1 kind="Pod" virtual=false
I0522 07:24:34.137296       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-z2tbk" objectUID=9158baf3-2df4-4075-a185-89e6527db283 kind="Pod" virtual=false
I0522 07:24:34.157657       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-vs7gk" objectUID=1ba7d9bb-40be-4bee-8f7a-2911461c6ef8 kind="Pod" virtual=false
I0522 07:24:34.193323       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-rcg4f" objectUID=b6e17f17-51bc-4d13-bcdd-aaa1f43ecbcd kind="Pod" virtual=false
I0522 07:24:34.242210       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-dxkb8" objectUID=763d9e1e-9d2c-40b5-b7ca-709b28ca0fe9 kind="Pod" virtual=false
I0522 07:24:34.273686       1 replica_set.go:559] "Too few replicas" replicaSet="services-5288/externalsvc" need=2 creating=2
I0522 07:24:34.278400       1 event.go:291] "Event occurred" object="services-5288/externalsvc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: externalsvc-vhp7z"
I0522 07:24:34.283723       1 event.go:291] "Event occurred" object="services-5288/externalsvc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: externalsvc-wz5hx"
I0522 07:24:34.296088       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-ldbv2" objectUID=5635c020-960c-41e1-9bf8-cd92fcb3b778 kind="Pod" virtual=false
I0522 07:24:34.341016       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-w47ks" objectUID=5f05565c-d3aa-442c-b468-fbbc80068ff5 kind="Pod" virtual=false
I0522 07:24:34.401827       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-hsmqz" objectUID=89229ee2-9178-45dd-a134-eb95d5e194fc kind="Pod" virtual=false
I0522 07:24:34.452331       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-99jgn" objectUID=0bada8de-1f93-4b54-aef2-d25c11e87bf7 kind="Pod" virtual=false
I0522 07:24:34.458049       1 namespace_controller.go:185] Namespace has been deleted sysctl-6351
I0522 07:24:34.492388       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-s5npf" objectUID=a6e52c35-d812-4a6f-96d1-6099f1cdeafc kind="Pod" virtual=false
I0522 07:24:34.541823       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-hgf5d" objectUID=c57f8079-99f8-4357-83e8-dd2726617b5f kind="Pod" virtual=false
I0522 07:24:34.567981       1 pv_controller.go:879] volume "pvc-67224111-752f-401c-86b5-f831beefc542" entered phase "Bound"
I0522 07:24:34.568252       1 pv_controller.go:982] volume "pvc-67224111-752f-401c-86b5-f831beefc542" bound to claim "volumemode-7718/csi-hostpath8fx7m"
I0522 07:24:34.574013       1 pv_controller.go:823] claim "volumemode-7718/csi-hostpath8fx7m" entered phase "Bound"
I0522 07:24:34.593298       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-4ktmx" objectUID=c992953c-649b-44d5-8b1f-c6bc33793b86 kind="Pod" virtual=false
I0522 07:24:34.641608       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-jgbh9" objectUID=9b38ed07-218b-42af-964e-d522e94d62d4 kind="Pod" virtual=false
I0522 07:24:34.679637       1 pv_controller.go:879] volume "pvc-962ba1f9-e853-4577-a497-11b7cd9d01f9" entered phase "Bound"
I0522 07:24:34.679669       1 pv_controller.go:982] volume "pvc-962ba1f9-e853-4577-a497-11b7cd9d01f9" bound to claim "ephemeral-5703/inline-volume-tester-7cr5w-my-volume-0"
I0522 07:24:34.686012       1 pv_controller.go:823] claim "ephemeral-5703/inline-volume-tester-7cr5w-my-volume-0" entered phase "Bound"
I0522 07:24:34.702321       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-cpvtd" objectUID=b65a8adb-a3e6-48de-92b0-994de9f9b415 kind="Pod" virtual=false
E0522 07:24:34.723732       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 07:24:34.742734       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-7gcgk" objectUID=d89f7c81-6aef-40fe-b4b6-31508a571b7a kind="Pod" virtual=false
I0522 07:24:34.789931       1 garbagecollector.go:471] "Processing object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-4b9qz" objectUID=d5094161-e051-4717-b0b8-fe32e0ff4338 kind="Pod" virtual=false
I0522 07:24:34.839236       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-ptx8l" objectUID=41312fc0-ae95-4b45-a80f-c51607936f7f kind="Pod" propagationPolicy=Background
I0522 07:24:34.889051       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-gwklz" objectUID=ad141475-332e-4497-9c37-f106a8e6e03e kind="Pod" propagationPolicy=Background
I0522 07:24:34.938893       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-kxd7r" objectUID=9b245e14-17a2-447c-9f50-1833cdb0ac37 kind="Pod" propagationPolicy=Background
I0522 07:24:34.994604       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-9fwpg" objectUID=7bc9d0a0-e0ce-4ba8-b258-3eab945fa328 kind="Pod" propagationPolicy=Background
I0522 07:24:35.038925       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-lfwx9" objectUID=8440f6a1-7cb6-4c61-8dfa-b885b4d280b1 kind="Pod" propagationPolicy=Background
I0522 07:24:35.093316       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-z2tbk" objectUID=9158baf3-2df4-4075-a185-89e6527db283 kind="Pod" propagationPolicy=Background
I0522 07:24:35.141280       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-vs7gk" objectUID=1ba7d9bb-40be-4bee-8f7a-2911461c6ef8 kind="Pod" propagationPolicy=Background
I0522 07:24:35.202778       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-rcg4f" objectUID=b6e17f17-51bc-4d13-bcdd-aaa1f43ecbcd kind="Pod" propagationPolicy=Background
I0522 07:24:35.243225       1 event.go:291] "Event occurred" object="csi-mock-volumes-7156-808/csi-mockplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
I0522 07:24:35.252913       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-dxkb8" objectUID=763d9e1e-9d2c-40b5-b7ca-709b28ca0fe9 kind="Pod" propagationPolicy=Background
I0522 07:24:35.301991       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-ldbv2" objectUID=5635c020-960c-41e1-9bf8-cd92fcb3b778 kind="Pod" propagationPolicy=Background
I0522 07:24:35.332014       1 namespace_controller.go:185] Namespace has been deleted kubectl-1581
I0522 07:24:35.348358       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-w47ks" objectUID=5f05565c-d3aa-442c-b468-fbbc80068ff5 kind="Pod" propagationPolicy=Background
I0522 07:24:35.375478       1 event.go:291] "Event occurred" object="csi-mock-volumes-7156-808/csi-mockplugin-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful"
I0522 07:24:35.406076       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-hsmqz" objectUID=89229ee2-9178-45dd-a134-eb95d5e194fc kind="Pod" propagationPolicy=Background
I0522 07:24:35.438545       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-99jgn" objectUID=0bada8de-1f93-4b54-aef2-d25c11e87bf7 kind="Pod" propagationPolicy=Background
I0522 07:24:35.488446       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-s5npf" objectUID=a6e52c35-d812-4a6f-96d1-6099f1cdeafc kind="Pod" propagationPolicy=Background
I0522 07:24:35.530253       1 event.go:291] "Event occurred" object="csi-mock-volumes-7156-808/csi-mockplugin-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful"
I0522 07:24:35.540097       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-hgf5d" objectUID=c57f8079-99f8-4357-83e8-dd2726617b5f kind="Pod" propagationPolicy=Background
I0522 07:24:35.588950       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-4ktmx" objectUID=c992953c-649b-44d5-8b1f-c6bc33793b86 kind="Pod" propagationPolicy=Background
I0522 07:24:35.631084       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-67224111-752f-401c-86b5-f831beefc542" (UniqueName: "kubernetes.io/csi/csi-hostpath-volumemode-7718^c1f8e11d-bace-11eb-b6d6-aed677ea9640") from node "ip-172-20-63-92.ap-northeast-2.compute.internal"
I0522 07:24:35.639164       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-jgbh9" objectUID=9b38ed07-218b-42af-964e-d522e94d62d4 kind="Pod" propagationPolicy=Background
E0522 07:24:35.674825       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 07:24:35.691313       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-cpvtd" objectUID=b65a8adb-a3e6-48de-92b0-994de9f9b415 kind="Pod" propagationPolicy=Background
I0522 07:24:35.739279       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-7gcgk" objectUID=d89f7c81-6aef-40fe-b4b6-31508a571b7a kind="Pod" propagationPolicy=Background
I0522 07:24:35.789006       1 garbagecollector.go:580] "Deleting object" object="kubelet-4975/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-4b9qz" objectUID=d5094161-e051-4717-b0b8-fe32e0ff4338 kind="Pod" propagationPolicy=Background
I0522 07:24:35.841454       1 garbagecollector.go:471] "Processing object" object="cronjob-9483/concurrent-27027803" objectUID=0dc4ccd4-bbe0-402b-bc50-d9a6b3581eaf kind="Job" virtual=false
I0522 07:24:35.891328       1 request.go:668] Waited for 1.000525262s due to client-side throttling, not priority and fairness, request: DELETE:https://127.0.0.1/api/v1/namespaces/kubelet-4975/pods/cleanup40-838b5278-4f35-410e-abf2-b7b95572cc4a-gwklz
I0522 07:24:35.895730       1 garbagecollector.go:471] "Processing object" object="cronjob-9483/concurrent-27027804" objectUID=3479345b-718a-48a4-ace2-4ca0f2aff2ec kind="Job" virtual=false
I0522 07:24:35.918179       1 aws.go:2037] Releasing in-process attachment entry: bc -> volume vol-00f69c3b4351d31e7
I0522 07:24:35.918222       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-3f52de0b-72db-469b-8679-dffc0f012378" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-00f69c3b4351d31e7") from node "ip-172-20-63-92.ap-northeast-2.compute.internal"
I0522 07:24:35.918352       1 event.go:291] "Event occurred" object="volume-9870/aws-injector" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-3f52de0b-72db-469b-8679-dffc0f012378\" "
I0522 07:24:35.979919       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-2059/awsth4wj"
I0522 07:24:36.014702       1 pv_controller.go:640] volume "pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad" is released and reclaim policy "Delete" will be executed
I0522 07:24:36.029096       1 pv_controller.go:879] volume "pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad" entered phase "Released"
I0522 07:24:36.032967       1 pv_controller.go:1341] isVolumeReleased[pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad]: volume is released
I0522 07:24:36.034428       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-1113
I0522 07:24:36.038170       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-962ba1f9-e853-4577-a497-11b7cd9d01f9" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-5703^c20ccac4-bace-11eb-8bd9-f2a8b2cfa0df") from node "ip-172-20-49-129.ap-northeast-2.compute.internal"
I0522 07:24:36.149590       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-northeast-2a/vol-0d363483961ab8c94: error deleting EBS volume "vol-0d363483961ab8c94" since volume is currently attached to "i-0b9275cce37678aeb"
E0522 07:24:36.149746       1 goroutinemap.go:150] Operation for "delete-pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad[a583c047-3c7a-41d2-87fa-f35745dd9d14]" failed. No retries permitted until 2021-05-22 07:24:36.649623796 +0000 UTC m=+1062.315389649 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-0d363483961ab8c94\" since volume is currently attached to \"i-0b9275cce37678aeb\""
I0522 07:24:36.149866       1 event.go:291] "Event occurred" object="pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0d363483961ab8c94\" since volume is currently attached to \"i-0b9275cce37678aeb\""
I0522 07:24:36.193351       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-67224111-752f-401c-86b5-f831beefc542" (UniqueName: "kubernetes.io/csi/csi-hostpath-volumemode-7718^c1f8e11d-bace-11eb-b6d6-aed677ea9640") from node "ip-172-20-63-92.ap-northeast-2.compute.internal"
I0522 07:24:36.193753       1 event.go:291] "Event occurred" object="volumemode-7718/pod-8ba422c5-9575-499a-84d5-6a1c34fe7046" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-67224111-752f-401c-86b5-f831beefc542\" "
I0522 07:24:36.214979       1 namespace_controller.go:185] Namespace has been deleted container-probe-1923
I0522 07:24:36.586518       1 event.go:291] "Event occurred" object="ephemeral-5703/inline-volume-tester-7cr5w" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-962ba1f9-e853-4577-a497-11b7cd9d01f9\" "
I0522 07:24:36.586623       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-962ba1f9-e853-4577-a497-11b7cd9d01f9" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-5703^c20ccac4-bace-11eb-8bd9-f2a8b2cfa0df") from node "ip-172-20-49-129.ap-northeast-2.compute.internal"
I0522 07:24:36.839118       1 garbagecollector.go:580] "Deleting object" object="cronjob-9483/concurrent-27027803" objectUID=0dc4ccd4-bbe0-402b-bc50-d9a6b3581eaf kind="Job" propagationPolicy=Background
I0522 07:24:36.888662       1 garbagecollector.go:580] "Deleting object" object="cronjob-9483/concurrent-27027804" objectUID=3479345b-718a-48a4-ace2-4ca0f2aff2ec kind="Job" propagationPolicy=Background
I0522 07:24:36.939304       1 garbagecollector.go:471] "Processing object" object="cronjob-9483/concurrent-27027803-vk564" objectUID=1ac041d1-32aa-4c01-a90a-cdc0140cab51 kind="Pod" virtual=false
I0522 07:24:36.989617       1 garbagecollector.go:471] "Processing object" object="cronjob-9483/concurrent-27027804-ck75x" objectUID=b7d93f90-9835-4dbb-9379-0681a1e224c1 kind="Pod" virtual=false
I0522 07:24:37.038664       1 garbagecollector.go:580] "Deleting object" object="cronjob-9483/concurrent-27027803-vk564" objectUID=1ac041d1-32aa-4c01-a90a-cdc0140cab51 kind="Pod" propagationPolicy=Background
I0522 07:24:37.088238       1 garbagecollector.go:580] "Deleting object" object="cronjob-9483/concurrent-27027804-ck75x" objectUID=b7d93f90-9835-4dbb-9379-0681a1e224c1 kind="Pod" propagationPolicy=Background
I0522 07:24:37.984596       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-847dcfb7fb" objectUID=a7c221a1-3c34-4ded-82d3-f915810e9385 kind="ReplicaSet" virtual=false
I0522 07:24:37.984860       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-99f7796d5" objectUID=0a0ba30c-3ce2-4217-a33d-ec629347aec5 kind="ReplicaSet" virtual=false
I0522 07:24:37.984915       1 deployment_controller.go:583] "Deployment has been deleted" deployment="deployment-6893/webserver"
I0522 07:24:37.984931       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-58477d78f9" objectUID=6ca3321d-c215-4762-867b-ea765a14d802 kind="ReplicaSet" virtual=false
I0522 07:24:37.989579       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-58477d78f9" objectUID=6ca3321d-c215-4762-867b-ea765a14d802 kind="ReplicaSet" propagationPolicy=Background
I0522 07:24:37.989878       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-847dcfb7fb" objectUID=a7c221a1-3c34-4ded-82d3-f915810e9385 kind="ReplicaSet" propagationPolicy=Background
I0522 07:24:37.990123       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-99f7796d5" objectUID=0a0ba30c-3ce2-4217-a33d-ec629347aec5 kind="ReplicaSet" propagationPolicy=Background
I0522 07:24:37.999147       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-58477d78f9-vrtmt" objectUID=8f2459bf-aad4-4864-a719-1669557c09ea kind="Pod" virtual=false
I0522 07:24:37.999437       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-58477d78f9-wkqbz" objectUID=f9ab58e2-03c2-4db6-894e-12c9ac11cde0 kind="Pod" virtual=false
I0522 07:24:37.999690       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-58477d78f9-th875" objectUID=f8c63881-5e9c-4e70-887b-31ea1bdbeec0 kind="Pod" virtual=false
I0522 07:24:37.999915       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-58477d78f9-m7s5r" objectUID=a33cbafc-195c-43a9-b05b-c0951a8405db kind="Pod" virtual=false
I0522 07:24:38.000136       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-58477d78f9-k74m4" objectUID=a09ef6a0-0607-42e8-b755-3002620aa695 kind="Pod" virtual=false
I0522 07:24:38.000317       1 garbagecollector.go:471] "Processing object" object="deployment-6893/webserver-58477d78f9-csjms" objectUID=7bae3252-4b0e-47c9-8b23-8eb65949f11f kind="Pod" virtual=false
I0522 07:24:38.001199       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-58477d78f9-vrtmt" objectUID=8f2459bf-aad4-4864-a719-1669557c09ea kind="Pod" propagationPolicy=Background
I0522 07:24:38.003610       1 garbagecollector.go:580] "Deleting object" object="deployment-6893/webserver-58477d78f9-wkqbz"
objectUID=f9ab58e2-03c2-4db6-894e-12c9ac11cde0 kind=\"Pod\" propagationPolicy=Background\nI0522 07:24:38.006926       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-csjms\" objectUID=7bae3252-4b0e-47c9-8b23-8eb65949f11f kind=\"Pod\" propagationPolicy=Background\nI0522 07:24:38.007523       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-th875\" objectUID=f8c63881-5e9c-4e70-887b-31ea1bdbeec0 kind=\"Pod\" propagationPolicy=Background\nI0522 07:24:38.007921       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-k74m4\" objectUID=a09ef6a0-0607-42e8-b755-3002620aa695 kind=\"Pod\" propagationPolicy=Background\nI0522 07:24:38.008031       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-m7s5r\" objectUID=a33cbafc-195c-43a9-b05b-c0951a8405db kind=\"Pod\" propagationPolicy=Background\nI0522 07:24:38.016862       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-vrtmt\" objectUID=900a8fdd-7b68-4d89-8a15-362552a170de kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:38.023973       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-wkqbz\" objectUID=4c2f95cf-aa34-412b-a596-b9d2b0074f95 kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:38.027097       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-csjms\" objectUID=26edb751-49cc-4528-ab6f-9aba8182e7e7 kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:38.030185       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-th875\" objectUID=734cbd20-241f-459a-b45e-d9317008cc78 kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:38.043572       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-k74m4\" objectUID=af0917f0-63c4-4361-9bc3-d57c1d3cdfd5 kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:38.096551       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-m7s5r\" objectUID=fd7c02a4-d329-458f-9488-3e35eadde657 kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:38.142632       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-vrtmt\" objectUID=900a8fdd-7b68-4d89-8a15-362552a170de kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:38.189473       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-wkqbz\" objectUID=4c2f95cf-aa34-412b-a596-b9d2b0074f95 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:38.239671       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-csjms\" objectUID=26edb751-49cc-4528-ab6f-9aba8182e7e7 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:38.289224       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-th875\" objectUID=734cbd20-241f-459a-b45e-d9317008cc78 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:38.340904       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-k74m4\" objectUID=af0917f0-63c4-4361-9bc3-d57c1d3cdfd5 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0522 07:24:38.391682       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6893/webserver-58477d78f9-m7s5r\" 
objectUID=fd7c02a4-d329-458f-9488-3e35eadde657 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0522 07:24:38.443084       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"webserver-58477d78f9-vrtmt\", UID:\"900a8fdd-7b68-4d89-8a15-362552a170de\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"deployment-6893\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"webserver-58477d78f9-vrtmt\", UID:\"8f2459bf-aad4-4864-a719-1669557c09ea\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0xc003215376)}}}: ciliumendpoints.cilium.io \"webserver-58477d78f9-vrtmt\" not found\nI0522 07:24:38.448396       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-vrtmt\" objectUID=900a8fdd-7b68-4d89-8a15-362552a170de kind=\"CiliumEndpoint\" virtual=false\nE0522 07:24:38.539604       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"webserver-58477d78f9-csjms\", UID:\"26edb751-49cc-4528-ab6f-9aba8182e7e7\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"deployment-6893\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"webserver-58477d78f9-csjms\", UID:\"7bae3252-4b0e-47c9-8b23-8eb65949f11f\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0xc00191dd26)}}}: ciliumendpoints.cilium.io \"webserver-58477d78f9-csjms\" not found\nI0522 07:24:38.546127       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-csjms\" objectUID=26edb751-49cc-4528-ab6f-9aba8182e7e7 kind=\"CiliumEndpoint\" virtual=false\nE0522 07:24:38.639789       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"webserver-58477d78f9-k74m4\", UID:\"af0917f0-63c4-4361-9bc3-d57c1d3cdfd5\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"deployment-6893\"}, 
dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"webserver-58477d78f9-k74m4\", UID:\"a09ef6a0-0607-42e8-b755-3002620aa695\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0xc0023c5886)}}}: ciliumendpoints.cilium.io \"webserver-58477d78f9-k74m4\" not found\nI0522 07:24:38.645984       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6893/webserver-58477d78f9-k74m4\" objectUID=af0917f0-63c4-4361-9bc3-d57c1d3cdfd5 kind=\"CiliumEndpoint\" virtual=false\nI0522 07:24:39.383315       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volumemode-5703/pvc-mlqv7\"\nI0522 07:24:39.396192       1 pv_controller.go:640] volume \"local-nmzfd\" is released and reclaim policy \"Retain\" will be executed\nI0522 07:24:39.399470       1 pv_controller.go:879] volume \"local-nmzfd\" entered phase \"Released\"\nI0522 07:24:39.540765       1 pv_controller_base.go:505] deletion of claim \"volumemode-5703/pvc-mlqv7\" was already processed\nE0522 07:24:39.968655       1 tokens_controller.go:262] error synchronizing serviceaccount cronjob-9483/default: secrets \"default-token-5mtw6\" is forbidden: unable to create new content in namespace cronjob-9483 because it is being terminated\nE0522 07:24:40.575152       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0522 07:24:40.624031       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-4367/default: secrets \"default-token-5hn55\" is forbidden: unable to create new content in namespace kubectl-4367 because it is being terminated\nI0522 07:24:43.226101       1 pv_controller.go:879] volume \"hostpath-mflgl\" entered phase \"Available\"\nI0522 07:24:43.702231       1 pv_controller.go:930] claim \"pv-protection-3102/pvc-9rnrz\" bound to volume \"hostpath-mflgl\"\nE0522 07:24:43.703383       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0522 07:24:43.710366       1 pv_controller.go:879] volume \"hostpath-mflgl\" entered phase \"Bound\"\nI0522 07:24:43.710502       1 pv_controller.go:982] volume \"hostpath-mflgl\" bound to claim \"pv-protection-3102/pvc-9rnrz\"\nI0522 07:24:43.717407       1 pv_controller.go:823] claim \"pv-protection-3102/pvc-9rnrz\" entered phase \"Bound\"\nI0522 07:24:44.023888       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0d363483961ab8c94\") on node \"ip-172-20-35-65.ap-northeast-2.compute.internal\" \nI0522 07:24:44.028195       1 operation_generator.go:1483] Verified volume is safe to detach for volume 
\"pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0d363483961ab8c94\") on node \"ip-172-20-35-65.ap-northeast-2.compute.internal\" \nI0522 07:24:44.090135       1 namespace_controller.go:185] Namespace has been deleted deployment-6893\nI0522 07:24:44.337134       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"pv-protection-3102/pvc-9rnrz\"\nI0522 07:24:44.343812       1 pv_controller.go:640] volume \"hostpath-mflgl\" is released and reclaim policy \"Retain\" will be executed\nI0522 07:24:44.349638       1 pv_controller.go:879] volume \"hostpath-mflgl\" entered phase \"Released\"\nI0522 07:24:44.357876       1 pv_controller_base.go:505] deletion of claim \"pv-protection-3102/pvc-9rnrz\" was already processed\nE0522 07:24:44.436792       1 namespace_controller.go:162] deletion of namespace job-2273 failed: unexpected items still remain in namespace: job-2273 for gvr: /v1, Resource=pods\nE0522 07:24:44.977083       1 tokens_controller.go:262] error synchronizing serviceaccount projected-1766/default: secrets \"default-token-l44tc\" is forbidden: unable to create new content in namespace projected-1766 because it is being terminated\nI0522 07:24:45.045376       1 namespace_controller.go:185] Namespace has been deleted cronjob-9483\nI0522 07:24:45.974004       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7156/pvc-lt8xn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-7156\\\" or manually created by system administrator\"\nI0522 07:24:45.985126       1 pv_controller.go:879] volume \"pvc-de5771df-de2f-496c-9014-9c39915367f0\" entered phase \"Bound\"\nI0522 07:24:45.985216       1 pv_controller.go:982] volume \"pvc-de5771df-de2f-496c-9014-9c39915367f0\" bound to claim \"csi-mock-volumes-7156/pvc-lt8xn\"\nI0522 07:24:45.994183       1 pv_controller.go:823] claim \"csi-mock-volumes-7156/pvc-lt8xn\" entered phase \"Bound\"\nI0522 07:24:46.148743       1 pv_controller.go:1341] isVolumeReleased[pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad]: volume is released\nI0522 07:24:46.320022       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-northeast-2a/vol-0d363483961ab8c94: error deleting EBS volume \"vol-0d363483961ab8c94\" since volume is currently attached to \"i-0b9275cce37678aeb\"\nE0522 07:24:46.320804       1 goroutinemap.go:150] Operation for \"delete-pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad[a583c047-3c7a-41d2-87fa-f35745dd9d14]\" failed. No retries permitted until 2021-05-22 07:24:47.320069202 +0000 UTC m=+1072.985835059 (durationBeforeRetry 1s). 
Error: \"error deleting EBS volume \\\"vol-0d363483961ab8c94\\\" since volume is currently attached to \\\"i-0b9275cce37678aeb\\\"\"\nI0522 07:24:46.320961       1 event.go:291] \"Event occurred\" object=\"pvc-e6636e4d-4b3a-4093-bdb5-cfad10f791ad\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-0d363483961ab8c94\\\" since volume is currently attached to \\\"i-0b9275cce37678aeb\\\"\"\nE0522 07:24:46.325254       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0522 07:24:46.651263       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-de5771df-de2f-496c-9014-9c39915367f0\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7156^4\") from node \"ip-172-20-35-65.ap-northeast-2.compute.internal\" \nI0522 07:24:46.908936       1 namespace_controller.go:185] Namespace has been deleted provisioning-2948\nI0522 07:24:46.979204       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-143/nfsthlbg\"\nI0522 07:24:46.990375       1 pv_controller.go:640] volume \"pvc-aa615edc-9ebf-431c-82ea-e4a9f216824f\" is released and reclaim policy \"Delete\" will be executed\nI0522 07:24:46.993319       1 pv_controller.go:879] volume \"pvc-aa615edc-9ebf-431c-82ea-e4a9f216824f\" entered phase \"Released\"\nI0522 07:24:46.996647       1 pv_controller.go:1341] isVolumeReleased[pvc-aa615edc-9ebf-431c-82ea-e4a9f216824f]: volume is released\nI0522 07:24:47.005823       1 pv_controller_base.go:505] deletion of claim \"provisioning-143/nfsthlbg\" was already processed\nI0522 07:24:47.021599       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-145/pvc-7w4l7\"\nI0522 07:24:47.030670       1 pv_controller.go:640] volume \"pvc-c5e4697d-54a5-45c2-a5fe-f9ed7acf8391\" is released and reclaim policy \"Delete\" will be executed\nI0522 07:24:47.033687       1 pv_controller.go:879] volume \"pvc-c5e4697d-54a5-45c2-a5fe-f9ed7acf8391\" entered phase \"Released\"\nI0522 07:24:47.035013       1 pv_controller.go:1341] isVolumeReleased[pvc-c5e4697d-54a5-45c2-a5fe-f9ed7acf8391]: volume is released\nI0522 07:24:47.059291       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-145/pvc-7w4l7\" was already processed\n==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-ip-172-20-62-2.ap-northeast-2.compute.internal ====\n==== START logs for container kube-scheduler of pod kube-system/kube-scheduler-ip-172-20-62-2.ap-northeast-2.compute.internal ====\nI0522 07:06:54.476267       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0522 07:06:54.484846       1 flags.go:59] FLAG: --address=\"0.0.0.0\"\nI0522 07:06:54.484862       1 flags.go:59] FLAG: --algorithm-provider=\"\"\nI0522 07:06:54.484867       1 flags.go:59] FLAG: --allow-metric-labels=\"[]\"\nI0522 07:06:54.484877       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0522 07:06:54.484883       1 flags.go:59] FLAG: --authentication-kubeconfig=\"\"\nI0522 07:06:54.484893       1 flags.go:59] FLAG: --authentication-skip-lookup=\"false\"\nI0522 07:06:54.484899       1 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl=\"10s\"\nI0522 07:06:54.484906       1 flags.go:59] FLAG: --authentication-tolerate-lookup-failure=\"true\"\nI0522 07:06:54.484910       1 flags.go:59] FLAG: 
--authorization-always-allow-paths=\"[/healthz,/readyz,/livez]\"\nI0522 07:06:54.484919       1 flags.go:59] FLAG: --authorization-kubeconfig=\"\"\nI0522 07:06:54.484924       1 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl=\"10s\"\nI0522 07:06:54.484932       1 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl=\"10s\"\nI0522 07:06:54.484936       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0522 07:06:54.484942       1 flags.go:59] FLAG: --cert-dir=\"\"\nI0522 07:06:54.484946       1 flags.go:59] FLAG: --client-ca-file=\"\"\nI0522 07:06:54.484950       1 flags.go:59] FLAG: --config=\"/var/lib/kube-scheduler/config.yaml\"\nI0522 07:06:54.484956       1 flags.go:59] FLAG: --contention-profiling=\"true\"\nI0522 07:06:54.484961       1 flags.go:59] FLAG: --disabled-metrics=\"[]\"\nI0522 07:06:54.484967       1 flags.go:59] FLAG: --experimental-logging-sanitization=\"false\"\nI0522 07:06:54.484975       1 flags.go:59] FLAG: --feature-gates=\"\"\nI0522 07:06:54.484982       1 flags.go:59] FLAG: --hard-pod-affinity-symmetric-weight=\"1\"\nI0522 07:06:54.485005       1 flags.go:59] FLAG: --help=\"false\"\nI0522 07:06:54.485010       1 flags.go:59] FLAG: --http2-max-streams-per-connection=\"0\"\nI0522 07:06:54.485015       1 flags.go:59] FLAG: --kube-api-burst=\"100\"\nI0522 07:06:54.485020       1 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0522 07:06:54.485024       1 flags.go:59] FLAG: --kube-api-qps=\"50\"\nI0522 07:06:54.485031       1 flags.go:59] FLAG: --kubeconfig=\"\"\nI0522 07:06:54.485035       1 flags.go:59] FLAG: --leader-elect=\"true\"\nI0522 07:06:54.485043       1 flags.go:59] FLAG: --leader-elect-lease-duration=\"15s\"\nI0522 07:06:54.485047       1 flags.go:59] FLAG: --leader-elect-renew-deadline=\"10s\"\nI0522 07:06:54.485052       1 flags.go:59] FLAG: --leader-elect-resource-lock=\"leases\"\nI0522 07:06:54.485056       1 flags.go:59] FLAG: --leader-elect-resource-name=\"kube-scheduler\"\nI0522 07:06:54.485060       1 flags.go:59] FLAG: --leader-elect-resource-namespace=\"kube-system\"\nI0522 07:06:54.485065       1 flags.go:59] FLAG: --leader-elect-retry-period=\"2s\"\nI0522 07:06:54.485069       1 flags.go:59] FLAG: --lock-object-name=\"kube-scheduler\"\nI0522 07:06:54.485076       1 flags.go:59] FLAG: --lock-object-namespace=\"kube-system\"\nI0522 07:06:54.485080       1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI0522 07:06:54.485089       1 flags.go:59] FLAG: --log-dir=\"\"\nI0522 07:06:54.485094       1 flags.go:59] FLAG: --log-file=\"/var/log/kube-scheduler.log\"\nI0522 07:06:54.485099       1 flags.go:59] FLAG: --log-file-max-size=\"1800\"\nI0522 07:06:54.485104       1 flags.go:59] FLAG: --log-flush-frequency=\"5s\"\nI0522 07:06:54.485108       1 flags.go:59] FLAG: --logging-format=\"text\"\nI0522 07:06:54.485116       1 flags.go:59] FLAG: --logtostderr=\"false\"\nI0522 07:06:54.485120       1 flags.go:59] FLAG: --master=\"\"\nI0522 07:06:54.485124       1 flags.go:59] FLAG: --one-output=\"false\"\nI0522 07:06:54.485128       1 flags.go:59] FLAG: --permit-address-sharing=\"false\"\nI0522 07:06:54.485132       1 flags.go:59] FLAG: --permit-port-sharing=\"false\"\nI0522 07:06:54.485136       1 flags.go:59] FLAG: --policy-config-file=\"\"\nI0522 07:06:54.485140       1 flags.go:59] FLAG: --policy-configmap=\"\"\nI0522 07:06:54.485148       1 flags.go:59] FLAG: --policy-configmap-namespace=\"kube-system\"\nI0522 07:06:54.485153       1 flags.go:59] FLAG: --port=\"10251\"\nI0522 
07:06:54.485158       1 flags.go:59] FLAG: --profiling=\"true\"\nI0522 07:06:54.485163       1 flags.go:59] FLAG: --requestheader-allowed-names=\"[]\"\nI0522 07:06:54.485168       1 flags.go:59] FLAG: --requestheader-client-ca-file=\"\"\nI0522 07:06:54.485172       1 flags.go:59] FLAG: --requestheader-extra-headers-prefix=\"[x-remote-extra-]\"\nI0522 07:06:54.485178       1 flags.go:59] FLAG: --requestheader-group-headers=\"[x-remote-group]\"\nI0522 07:06:54.485183       1 flags.go:59] FLAG: --requestheader-username-headers=\"[x-remote-user]\"\nI0522 07:06:54.485208       1 flags.go:59] FLAG: --scheduler-name=\"default-scheduler\"\nI0522 07:06:54.485213       1 flags.go:59] FLAG: --secure-port=\"10259\"\nI0522 07:06:54.485217       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI0522 07:06:54.485221       1 flags.go:59] FLAG: --skip-headers=\"false\"\nI0522 07:06:54.485225       1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI0522 07:06:54.485229       1 flags.go:59] FLAG: --stderrthreshold=\"2\"\nI0522 07:06:54.485234       1 flags.go:59] FLAG: --tls-cert-file=\"\"\nI0522 07:06:54.485241       1 flags.go:59] FLAG: --tls-cipher-suites=\"[]\"\nI0522 07:06:54.485246       1 flags.go:59] FLAG: --tls-min-version=\"\"\nI0522 07:06:54.485250       1 flags.go:59] FLAG: --tls-private-key-file=\"\"\nI0522 07:06:54.485254       1 flags.go:59] FLAG: --tls-sni-cert-key=\"[]\"\nI0522 07:06:54.485259       1 flags.go:59] FLAG: --use-legacy-policy-config=\"false\"\nI0522 07:06:54.485263       1 flags.go:59] FLAG: --v=\"2\"\nI0522 07:06:54.485268       1 flags.go:59] FLAG: --version=\"false\"\nI0522 07:06:54.485278       1 flags.go:59] FLAG: --vmodule=\"\"\nI0522 07:06:54.485282       1 flags.go:59] FLAG: --write-config-to=\"\"\nI0522 07:06:55.333009       1 serving.go:347] Generated self-signed cert in-memory\nW0522 07:06:56.052521       1 authentication.go:308] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.\nW0522 07:06:56.052541       1 authentication.go:332] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.\nW0522 07:06:56.052555       1 authorization.go:184] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.\nI0522 07:06:56.071602       1 factory.go:194] \"Creating scheduler from algorithm provider\" algorithmProvider=\"DefaultProvider\"\nI0522 07:06:56.081772       1 configfile.go:72] Using component config:\napiVersion: kubescheduler.config.k8s.io/v1beta1\nclientConnection:\n  acceptContentTypes: \"\"\n  burst: 100\n  contentType: application/vnd.kubernetes.protobuf\n  kubeconfig: /var/lib/kube-scheduler/kubeconfig\n  qps: 50\nenableContentionProfiling: true\nenableProfiling: true\nhealthzBindAddress: 0.0.0.0:10251\nkind: KubeSchedulerConfiguration\nleaderElection:\n  leaderElect: true\n  leaseDuration: 15s\n  renewDeadline: 10s\n  resourceLock: leases\n  resourceName: kube-scheduler\n  resourceNamespace: kube-system\n  retryPeriod: 2s\nmetricsBindAddress: 0.0.0.0:10251\nparallelism: 16\npercentageOfNodesToScore: 0\npodInitialBackoffSeconds: 1\npodMaxBackoffSeconds: 10\nprofiles:\n- pluginConfig:\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: DefaultPreemptionArgs\n      minCandidateNodesAbsolute: 100\n      
minCandidateNodesPercentage: 10\n    name: DefaultPreemption\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      hardPodAffinityWeight: 1\n      kind: InterPodAffinityArgs\n    name: InterPodAffinity\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeAffinityArgs\n    name: NodeAffinity\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeResourcesFitArgs\n    name: NodeResourcesFit\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeResourcesLeastAllocatedArgs\n      resources:\n      - name: cpu\n        weight: 1\n      - name: memory\n        weight: 1\n    name: NodeResourcesLeastAllocated\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      defaultingType: System\n      kind: PodTopologySpreadArgs\n    name: PodTopologySpread\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      bindTimeoutSeconds: 600\n      kind: VolumeBindingArgs\n    name: VolumeBinding\n  plugins:\n    bind:\n      enabled:\n      - name: DefaultBinder\n        weight: 0\n    filter:\n      enabled:\n      - name: NodeUnschedulable\n        weight: 0\n      - name: NodeName\n        weight: 0\n      - name: TaintToleration\n        weight: 0\n      - name: NodeAffinity\n        weight: 0\n      - name: NodePorts\n        weight: 0\n      - name: NodeResourcesFit\n        weight: 0\n      - name: VolumeRestrictions\n        weight: 0\n      - name: EBSLimits\n        weight: 0\n      - name: GCEPDLimits\n        weight: 0\n      - name: NodeVolumeLimits\n        weight: 0\n      - name: AzureDiskLimits\n        weight: 0\n      - name: VolumeBinding\n        weight: 0\n      - name: VolumeZone\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: InterPodAffinity\n        weight: 0\n    permit: {}\n    postBind: {}\n    postFilter:\n      enabled:\n      - name: DefaultPreemption\n        weight: 0\n    preBind:\n      enabled:\n      - name: VolumeBinding\n        weight: 0\n    preFilter:\n      enabled:\n      - name: NodeResourcesFit\n        weight: 0\n      - name: NodePorts\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: InterPodAffinity\n        weight: 0\n      - name: VolumeBinding\n        weight: 0\n      - name: NodeAffinity\n        weight: 0\n    preScore:\n      enabled:\n      - name: InterPodAffinity\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: TaintToleration\n        weight: 0\n      - name: NodeAffinity\n        weight: 0\n    queueSort:\n      enabled:\n      - name: PrioritySort\n        weight: 0\n    reserve:\n      enabled:\n      - name: VolumeBinding\n        weight: 0\n    score:\n      enabled:\n      - name: NodeResourcesBalancedAllocation\n        weight: 1\n      - name: ImageLocality\n        weight: 1\n      - name: InterPodAffinity\n        weight: 1\n      - name: NodeResourcesLeastAllocated\n        weight: 1\n      - name: NodeAffinity\n        weight: 1\n      - name: NodePreferAvoidPods\n        weight: 10000\n      - name: PodTopologySpread\n        weight: 2\n      - name: TaintToleration\n        weight: 1\n  schedulerName: default-scheduler\n\nI0522 07:06:56.081792       1 server.go:138] Starting Kubernetes Scheduler version v1.21.1\nW0522 07:06:56.085382       1 authorization.go:47] Authorization is disabled\nW0522 07:06:56.085391       1 authentication.go:47] Authentication is disabled\nI0522 07:06:56.085401 
      1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251\nI0522 07:06:56.086870       1 tlsconfig.go:200] loaded serving cert [\"Generated self signed cert\"]: \"localhost@1621667215\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1621667215\" (2021-05-22 06:06:54 +0000 UTC to 2022-05-22 06:06:54 +0000 UTC (now=2021-05-22 07:06:56.086850066 +0000 UTC))\nI0522 07:06:56.087077       1 named_certificates.go:53] loaded SNI cert [0/\"self-signed loopback\"]: \"apiserver-loopback-client@1621667216\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1621667215\" (2021-05-22 06:06:55 +0000 UTC to 2022-05-22 06:06:55 +0000 UTC (now=2021-05-22 07:06:56.087065031 +0000 UTC))\nI0522 07:06:56.087099       1 secure_serving.go:197] Serving securely on [::]:10259\nI0522 07:06:56.087194       1 tlsconfig.go:240] Starting DynamicServingCertificateController\nI0522 07:06:56.087410       1 reflector.go:219] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134\nI0522 07:06:56.087736       1 reflector.go:219] Starting reflector *v1beta1.CSIStorageCapacity (0s) from k8s.io/client-go/informers/factory.go:134\nI0522 07:06:56.088012       1 reflector.go:219] Starting reflector *v1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134\nI0522 07:06:56.088240       1 reflector.go:219] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134\nI0522 07:06:56.088457       1 reflector.go:219] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134\nI0522 07:06:56.088658       1 reflector.go:219] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134\nI0522 07:06:56.088895       1 reflector.go:219] Starting reflector *v1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134\nI0522 07:06:56.089141       1 reflector.go:219] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134\nI0522 07:06:56.089385       1 reflector.go:219] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134\nI0522 07:06:56.089614       1 reflector.go:219] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134\nI0522 07:06:56.094021       1 reflector.go:219] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134\nI0522 07:06:56.094300       1 reflector.go:219] Starting reflector *v1.CSIDriver (0s) from k8s.io/client-go/informers/factory.go:134\nE0522 07:06:56.094561       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.094742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.094825       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.094910       1 reflector.go:138] 
k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.094996       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.095074       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.095153       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.095230       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.095306       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.095383       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.095461       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.095531       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nI0522 07:06:56.095581       1 reflector.go:219] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134\nE0522 07:06:56.095899       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.960663       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.989394       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch 
*v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:56.995877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:57.124678       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:57.185252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:57.203804       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:57.369477       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:57.415170       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:57.468601       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:57.487920       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:57.529258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:57.618896       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:57.673335       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:58.927970       1 reflector.go:138] 
k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:59.001736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:59.451272       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:59.565013       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:59.569357       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:59.637930       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:06:59.716662       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:00.114538       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:00.147832       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:00.163251       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:00.207769       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:00.233156       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection 
refused\nE0522 07:07:00.666426       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:02.480554       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:02.622285       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:03.497710       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:03.512976       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:03.552211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:03.594554       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:03.731179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:03.932846       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:04.073386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:04.814038       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nE0522 07:07:05.525803       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get 
\"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 127.0.0.1:443: connect: connection refused\nI0522 07:07:15.687394       1 trace.go:205] Trace[637060682]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:05.685) (total time: 10001ms):\nTrace[637060682]: [10.001414308s] [10.001414308s] END\nE0522 07:07:15.687414       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0522 07:07:15.959320       1 trace.go:205] Trace[919839625]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:05.958) (total time: 10000ms):\nTrace[919839625]: [10.000910528s] [10.000910528s] END\nE0522 07:07:15.959341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0522 07:07:20.596214       1 trace.go:205] Trace[192042856]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:10.595) (total time: 10001ms):\nTrace[192042856]: [10.00117118s] [10.00117118s] END\nE0522 07:07:20.596237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0522 07:07:22.038109       1 trace.go:205] Trace[612810657]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:12.036) (total time: 10001ms):\nTrace[612810657]: [10.00119016s] [10.00119016s] END\nE0522 07:07:22.038129       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0522 07:07:22.809457       1 trace.go:205] Trace[1622466547]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:12.808) (total time: 10000ms):\nTrace[1622466547]: [10.000553895s] [10.000553895s] END\nE0522 07:07:22.809476       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0522 07:07:23.185269       1 trace.go:205] Trace[1636240534]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:13.183) (total time: 10001ms):\nTrace[1636240534]: [10.001409239s] [10.001409239s] END\nE0522 07:07:23.185418       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0522 07:07:23.678753       1 trace.go:205] Trace[93664210]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:13.678) (total time: 10000ms):\nTrace[93664210]: [10.000673884s] [10.000673884s] END\nE0522 07:07:23.678777       
1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0522 07:07:23.899398       1 trace.go:205] Trace[170700263]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:13.897) (total time: 10001ms):\nTrace[170700263]: [10.001439503s] [10.001439503s] END\nE0522 07:07:23.899417       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0522 07:07:24.964368       1 trace.go:205] Trace[596686393]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:14.963) (total time: 10000ms):\nTrace[596686393]: [10.00088953s] [10.00088953s] END\nE0522 07:07:24.964389       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get \"https://127.0.0.1/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0522 07:07:25.224120       1 trace.go:205] Trace[1542316586]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:15.223) (total time: 10000ms):\nTrace[1542316586]: [10.000464042s] [10.000464042s] END\nE0522 07:07:25.224287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0522 07:07:25.227520       1 trace.go:205] Trace[89978560]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:15.227) (total time: 10000ms):\nTrace[89978560]: [10.000376535s] [10.000376535s] END\nE0522 07:07:25.227539       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0522 07:07:25.814098       1 trace.go:205] Trace[1044540527]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:15.812) (total time: 10001ms):\nTrace[1044540527]: [10.001440097s] [10.001440097s] END\nE0522 07:07:25.814242       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nI0522 07:07:25.969465       1 trace.go:205] Trace[1143845373]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (22-May-2021 07:07:15.968) (total time: 10001ms):\nTrace[1143845373]: [10.001133251s] [10.001133251s] END\nE0522 07:07:25.969811       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout\nE0522 07:07:28.392897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: 
failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope\nE0522 07:07:28.393109       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope\nI0522 07:07:46.252639       1 node_tree.go:65] Added node \"ip-172-20-62-2.ap-northeast-2.compute.internal\" in group \"ap-northeast-2:\\x00:ap-northeast-2a\" to NodeTree\nI0522 07:07:50.188375       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler...\nI0522 07:07:50.196683       1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler\nI0522 07:07:50.199629       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-hsjjc\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0522 07:07:50.207833       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-operator-6d9b48547d-22fm2\" node=\"ip-172-20-62-2.ap-northeast-2.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI0522 07:07:50.210249       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-p87rf\" node=\"ip-172-20-62-2.ap-northeast-2.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI0522 07:07:50.226666       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-7wxwz\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0522 07:07:50.234175       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/dns-controller-5f98b58844-8x87l\" node=\"ip-172-20-62-2.ap-northeast-2.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI0522 07:07:51.202737       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-hsjjc\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0522 07:07:52.203475       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-7wxwz\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0522 07:08:18.652280       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-hsjjc\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0522 07:08:18.652555       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-7wxwz\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0522 07:08:18.681506       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/kops-controller-f8glp\" node=\"ip-172-20-62-2.ap-northeast-2.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI0522 07:08:23.221897       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-hsjjc\" err=\"0/1 nodes are available: 1 node(s) had taint 
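
The two "forbidden" errors just above typically mark a short window after the apiserver starts serving but before it has finished reconciling its bootstrap RBAC roles for system:kube-scheduler; the leaderelection.go lines that follow show the scheduler then acquiring the kube-system/kube-scheduler lease before it begins binding pods, so only one scheduler instance acts at a time. A rough sketch of the same client-go leader-election primitive; the kubeconfig path, identity, and timings are illustrative assumptions (the real scheduler uses its own flags and defaults):

package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Identity distinguishes competing instances; hostname is a common choice.
	host, _ := os.Hostname()
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "kube-scheduler",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: host})
	if err != nil {
		panic(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // illustrative timings, not the scheduler's exact config
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* begin doing leader-only work */ },
			OnStoppedLeading: func() { os.Exit(1) }, // lost the lease; stop acting as leader
		},
	})
}
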
{node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0522 07:08:23.222090       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-7wxwz\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0522 07:09:03.089540       1 node_tree.go:65] Added node \"ip-172-20-49-129.ap-northeast-2.compute.internal\" in group \"ap-northeast-2:\\x00:ap-northeast-2a\" to NodeTree\nI0522 07:09:03.090081       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-hsjjc\" err=\"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\"\nI0522 07:09:03.109394       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-7wxwz\" err=\"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\"\nI0522 07:09:03.137419       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-cvjk8\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=2 feasibleNodes=1\nI0522 07:09:09.620965       1 node_tree.go:65] Added node \"ip-172-20-48-92.ap-northeast-2.compute.internal\" in group \"ap-northeast-2:\\x00:ap-northeast-2a\" to NodeTree\nI0522 07:09:09.646207       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-mbxrd\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=3 feasibleNodes=1\nI0522 07:09:12.898128       1 node_tree.go:65] Added node \"ip-172-20-63-92.ap-northeast-2.compute.internal\" in group \"ap-northeast-2:\\x00:ap-northeast-2a\" to NodeTree\nI0522 07:09:12.920545       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-p59x2\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=4 feasibleNodes=1\nI0522 07:09:13.156154       1 node_tree.go:65] Added node \"ip-172-20-35-65.ap-northeast-2.compute.internal\" in group \"ap-northeast-2:\\x00:ap-northeast-2a\" to NodeTree\nI0522 07:09:13.178453       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-5pxrr\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:09:13.258105       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-hsjjc\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\"\nI0522 07:09:13.281516       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-f45c4bf76-7wxwz\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\"\nI0522 07:09:23.282189       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-hsjjc\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:09:24.347467       1 scheduler.go:604] \"Successfully bound pod to node\" 
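
The recurring "0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate" lines show the scheduler's taint filter holding workload pods (coredns, coredns-autoscaler) off the lone control-plane node until worker nodes register, while daemons such as cilium, dns-controller, and kops-controller bind there because they tolerate the taint. A minimal sketch, using k8s.io/api types, of the toleration a pod would need for that node; the container name and image are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{Name: "app", Image: "nginx"}}, // illustrative
		Tolerations: []corev1.Toleration{{
			// The taint in the log has no value, so Exists (rather than Equal) matches it.
			Key:      "node-role.kubernetes.io/master",
			Operator: corev1.TolerationOpExists,
			Effect:   corev1.TaintEffectNoSchedule,
		}},
	}
	fmt.Printf("%+v\n", spec.Tolerations[0])
}
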
pod=\"kube-system/coredns-f45c4bf76-7wxwz\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:09:42.319621       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/coredns-f45c4bf76-tw5bz\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:30.798369       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8718/pod-subpath-test-inlinevolume-9782\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:30.821057       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6828/pod-subpath-test-inlinevolume-zjsx\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:30.921883       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-604/pod-configmaps-64e9aa87-0e37-4d85-afba-4b2f582bb5b7\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:31.239584       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-7168/pod-configmaps-100a5be9-e1c8-489f-81d3-63da4770cbfc\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:31.310628       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-5350/termination-message-container0d83c8ca-2e05-4b9d-b468-14758e47a9bd\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:31.409330       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-6999/pod-secrets-fe4a07f8-84fd-4a27-984a-c6a2dfc373b9\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:31.836595       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2358/hostexec-ip-172-20-63-92.ap-northeast-2.compute.internal-z7xr9\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:31.886540       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1323/hostexec-ip-172-20-63-92.ap-northeast-2.compute.internal-65mlv\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:32.103537       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-3926/downwardapi-volume-e7d15e56-81dd-44bb-ba9d-e91a2c00b0b0\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:32.380171       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-696/hostexec-ip-172-20-35-65.ap-northeast-2.compute.internal-lgw9f\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:32.504786       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-3571/test-recreate-deployment-6cb8b65c46-4zvjl\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:33.126516       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-5168/dns-test-8b674ace-e9b2-4f62-92eb-2ae321ac06f2\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:33.324876       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-648/pod1\" 
node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:34.087760       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5979/hostexec-ip-172-20-48-92.ap-northeast-2.compute.internal-dcp82\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:34.122954       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replicaset-6139/condition-test-mx8m2\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:34.134095       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replicaset-6139/condition-test-mjv8w\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:34.272408       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-5679/httpd\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:34.748071       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-5554/httpd\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:35.450242       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-1898/pfpod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0522 07:12:35.721037       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-9358/pod-2a29f885-6bde-44e8-b3f9-940016a6263d\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:36.022240       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-4905/pfpod\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:37.393098       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-1898/pfpod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0522 07:12:37.822580       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3545-481/csi-mockplugin-0\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:37.940824       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-4550/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0522 07:12:37.995086       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3545-481/csi-mockplugin-attacher-0\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:39.393979       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-1898/pfpod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0522 07:12:39.394271       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-4550/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match 
Pod's node affinity/selector.\"\nI0522 07:12:40.331538       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2358/pod-b14e483b-6573-48e9-8973-a16e39140fe3\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:41.502206       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-3571/test-recreate-deployment-85d47dcb4-v2l4w\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:42.108040       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5975/hostexec-ip-172-20-49-129.ap-northeast-2.compute.internal-sb7cx\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:42.242905       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-1898/burstable-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0522 07:12:42.395068       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-4550/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0522 07:12:42.581374       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-648/pod2\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:42.810657       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3004/pod-subpath-test-inlinevolume-mwcx\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:43.146117       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1643/hostexec-ip-172-20-49-129.ap-northeast-2.compute.internal-wh6qn\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:43.395762       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-1898/burstable-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0522 07:12:43.673923       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6236/hostexec-ip-172-20-48-92.ap-northeast-2.compute.internal-74g8q\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:45.134354       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-6391/pod1\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:45.294274       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-6391/pod2\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:45.397384       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-1898/burstable-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0522 07:12:45.455450       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-6391/pod3\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0522 07:12:46.636522       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-689/pod-projected-configmaps-f33e4e4a-d18d-4eac-bf54-e2ea6bd1eccc\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:46.850031       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5975/pod-subpath-test-preprovisionedpv-vz7z\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:47.202947       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-696/pod-subpath-test-preprovisionedpv-zfdk\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:47.818213       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5979/pod-subpath-test-preprovisionedpv-s7hz\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:48.014127       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-1957/metadata-volume-33b0b4e3-4eea-4e6b-bc30-57ab315828ab\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:48.192204       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-4362/pod-9e60d84f-4aaa-4bbf-922f-29b33b84dc4b\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:48.522780       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-1045/exec-volume-test-preprovisionedpv-mvzq\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:48.903481       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replicaset-8658/pod-adoption-release\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:50.095938       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2505-4050/csi-mockplugin-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:50.323888       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-8891/downwardapi-volume-69bbc578-41fe-4981-b9f1-646f533fb92d\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:50.411408       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2505-4050/csi-mockplugin-attacher-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:50.927693       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2077-155/csi-mockplugin-0\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:51.237884       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2077-155/csi-mockplugin-attacher-0\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:51.820648       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4564-3783/csi-mockplugin-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:52.130514       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4564-3783/csi-mockplugin-resizer-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" 
evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:52.530928       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7677/test-deployment-7b4c744884-n7dsj\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:52.532775       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7677/test-deployment-7b4c744884-n5gwk\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:52.577596       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replication-controller-7941/pod-release-cw2dk\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:53.063194       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replication-controller-7941/pod-release-rjtfk\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:53.231922       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2879/hostexec-ip-172-20-49-129.ap-northeast-2.compute.internal-ktgs8\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:54.225312       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replicaset-8658/pod-adoption-release-mkxwt\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:54.675713       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-6031/hostpath-injector\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:55.756055       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7677/test-deployment-748588b7cd-c94rt\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:56.578386       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-4323/liveness-63ea2c44-1c72-42db-bbf5-5bb8ee5eb96f\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:56.698788       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-6363/sample-webhook-deployment-78988fc6cd-5dj6x\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:12:57.543153       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5979/pod-subpath-test-preprovisionedpv-s7hz\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:59.112307       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8456-2471/csi-hostpath-attacher-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:59.206946       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9296/hostexec-ip-172-20-49-129.ap-northeast-2.compute.internal-sr2pr\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:59.584644       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8456-2471/csi-hostpathplugin-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:12:59.923837       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8456-2471/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" 
evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:00.085652       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3584/hostexec-ip-172-20-48-92.ap-northeast-2.compute.internal-22kpz\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:00.133856       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-1605/forbid-27027793-q269g\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:00.198010       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7677/test-deployment-748588b7cd-7gqn8\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:00.211836       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7677/test-deployment-85d87c6f4b-v4kwv\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:00.245175       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8456-2471/csi-hostpath-resizer-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:00.564482       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8456-2471/csi-hostpath-snapshotter-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:00.750916       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-1240/pod-update-activedeadlineseconds-c14d8109-ea9f-4076-b398-510ba8afca43\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:01.292684       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-3212/downwardapi-volume-beddadfc-2b6b-4f86-b86c-3ee80b273da1\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:01.390519       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2358/pod-90fdeb01-0f05-49db-93d9-2f68aed4a10a\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:01.911155       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2879/pod-subpath-test-preprovisionedpv-f289\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:03.141642       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1643/pod-subpath-test-preprovisionedpv-kc5m\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:03.250141       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6236/pod-subpath-test-preprovisionedpv-g658\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:04.197300       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7677/test-deployment-85d87c6f4b-fsr6t\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:06.414066       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3545/pvc-volume-tester-h2pt7\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:08.111953       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-8574/downward-api-9b470d72-496a-4ed8-b064-c25cef573057\" 
node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:08.389476       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-9035/pod-projected-secrets-4610ca46-6ad9-4805-8d51-19ffa8f48c86\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:08.977585       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-6031/hostpath-client\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:09.149426       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-7617/pod-ephm-test-projected-ll7n\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:10.202057       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-test-9820/busybox-scheduling-e7ae3989-8816-4c63-a53a-8b56bf9d44c4\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:10.295947       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7283-4471/csi-hostpath-attacher-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:10.784895       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7283-4471/csi-hostpathplugin-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:11.103061       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7283-4471/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:11.419082       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7283-4471/csi-hostpath-resizer-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:11.650525       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-1298/image-pull-test87e633ce-bd80-4b1f-b4cc-4e2061c4ab79\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:11.741976       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7283-4471/csi-hostpath-snapshotter-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:11.877542       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-6234/test-pod\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:12.704679       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5053/hostexec-ip-172-20-35-65.ap-northeast-2.compute.internal-lk8l4\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:13.087675       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"clientset-9332/poda30096be-f4b5-45ac-8d1f-1838ee311662\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:14.232439       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"csi-mock-volumes-2077/pvc-volume-tester-gfskx\" err=\"0/5 nodes are available: 1 node(s) did not have enough free storage, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0522 07:13:16.134649       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-provisioning-4208/glusterdynamic-provisioner-nv5t4\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:16.851764       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7465/hostexec-ip-172-20-48-92.ap-northeast-2.compute.internal-xjm7j\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:17.203814       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3584/pod-subpath-test-preprovisionedpv-k5s9\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:17.212321       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-2572/hostexec-ip-172-20-63-92.ap-northeast-2.compute.internal-9nwft\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:19.994180       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-3782/pod-307f77eb-c59b-46a9-8515-a3d4a5e0505b\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:20.206972       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5053/pod-2e9cae31-8a38-49c9-aaaf-293652b68bab\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:21.772007       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4349/httpd\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:22.642784       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3584/pod-subpath-test-preprovisionedpv-k5s9\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:22.688781       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-4362/pod-e90ad87a-9b5d-411c-9f3e-350b155980f9\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:24.487061       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8502/hostexec-ip-172-20-48-92.ap-northeast-2.compute.internal-78jw6\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:28.343622       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-977/alpine-nnp-true-7293462a-3a79-4dd5-81a1-cea7c8256cc7\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:28.816682       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-4615/deployment-585449566-s7tth\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:28.832248       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-4615/deployment-585449566-972qn\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:28.849189       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-4615/deployment-585449566-tcqng\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:28.986224       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-4615/deployment-55649fd747-vz6cv\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0522 07:13:31.031457       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-9471/annotationupdate3cb9e51b-3075-4487-8422-62b0f5b655d2\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:31.171255       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-9062/pod-configmaps-3b59a176-d34e-41a3-873e-a33386653845\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:32.061265       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-1514/labelsupdate2f3f5a2f-23e9-4fa2-ba8c-a4549bff125e\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:32.133544       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7465/exec-volume-test-preprovisionedpv-qks9\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:32.694264       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-2572/local-injector\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:33.387256       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8502/pod-subpath-test-preprovisionedpv-zdd5\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:33.740684       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-9216/aws-injector\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:37.484595       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-2484/pod-qos-class-e7825c6d-025e-463c-a3e2-0fc2eeffb6d0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:38.104245       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-5958/security-context-4870bfa1-7907-4530-9b13-53a9fd5bb186\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:38.337025       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8456/pod-subpath-test-dynamicpv-wdbf\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:38.755424       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7283/pod-subpath-test-dynamicpv-7m9t\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:38.956006       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-3/test-new-deployment-847dcfb7fb-fmz8h\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:39.344986       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-6047/pod-7a3bf59f-9f74-4953-a453-ce33660a284e\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:39.562679       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8597/netserver-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:39.727019       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8597/netserver-1\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:39.887179       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8597/netserver-2\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:40.052883       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8597/netserver-3\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:40.933485       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8932-5899/csi-hostpath-attacher-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:41.046554       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-53/hostexec-ip-172-20-49-129.ap-northeast-2.compute.internal-jkgz8\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:41.446611       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8932-5899/csi-hostpathplugin-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:41.776129       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8932-5899/csi-hostpath-provisioner-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:42.092807       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8932-5899/csi-hostpath-resizer-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:42.136651       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-6047/hostexec-ip-172-20-49-129.ap-northeast-2.compute.internal-bzcr4\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:42.440524       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8932-5899/csi-hostpath-snapshotter-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:43.740194       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-5189/ss2-0\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:44.019596       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-7109/netserver-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:44.180219       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-7109/netserver-1\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:44.340444       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-7109/netserver-2\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:44.502099       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-7109/netserver-3\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:46.610044       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-5189/ss2-1\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:46.884771       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-53/pod-49712495-5d39-43bc-a37c-9341a094e09c\" 
node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:46.959565       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1734-9962/csi-mockplugin-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:47.244703       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1734-9962/csi-mockplugin-attacher-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:47.979405       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2002/external-provisioner-hlz9b\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:48.361691       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"containers-7607/client-containers-0964a683-32e5-43bf-8eaa-a69d08ff9bc3\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:48.688406       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-8712/alpine-nnp-nil-d355ea6c-0cf1-4e5e-ab48-1c6d29aa20e6\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:49.017557       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-4290/busybox-privileged-false-7445070d-e8c5-47c6-81c1-5f090ff86c8d\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:49.374860       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-5189/ss2-2\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:51.239379       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6318/logs-generator\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:53.811755       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1734/pvc-volume-tester-fkh4d\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:54.007040       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2769/hostexec-ip-172-20-35-65.ap-northeast-2.compute.internal-mfq8p\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:54.765547       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-5189/ss2-0\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:55.037622       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-2572/local-client\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:55.843019       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-53/pod-24078559-ee44-4b93-b8d6-1b007182d0ad\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:56.235165       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-411/pod-59581bcb-498d-48df-a4dd-25b6199ee0e4\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:58.240621       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1936/pod-c7f1ec7a-3f77-4b0e-a2b0-0a2bd1e79a0f\" 
node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:13:58.670684       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8932/pod-subpath-test-dynamicpv-xtj2\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:13:58.880350       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4564/pvc-volume-tester-ft928\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:00.552966       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-2562/security-context-cd753a07-a062-4495-8a79-f38e1cb824cd\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:01.676806       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8597/test-container-pod\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:01.762317       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2769/pod-subpath-test-preprovisionedpv-gn77\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:02.805309       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2505/pvc-volume-tester-nbs55\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:02.835444       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"init-container-659/pod-init-1d782546-5981-4bb3-ab9c-3be651f4aef9\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:04.059464       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-9216/aws-client\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:04.821206       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-2633/pfpod\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:04.821564       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7932/hostexec-ip-172-20-49-129.ap-northeast-2.compute.internal-zfwt5\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:05.768578       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2002/pod-subpath-test-dynamicpv-fqbz\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:06.999547       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7530/hostexec-ip-172-20-63-92.ap-northeast-2.compute.internal-x89kz\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:07.084778       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-1943/explicit-root-uid\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:08.292741       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-7109/test-container-pod\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:08.414531       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-4948/netserver-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 
feasibleNodes=1\nI0522 07:14:08.577630       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-4948/netserver-1\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:08.747312       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-4948/netserver-2\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:08.991676       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-4948/netserver-3\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:09.992526       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1876/ss2-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:10.013146       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-334/busybox-83ebd566-7c72-4dc8-ad2c-007ec5c8ed9d\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:11.072206       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1936/pvc-volume-tester-writer-kh62d\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:11.430609       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8714-1327/csi-hostpath-attacher-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:11.906981       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8714-1327/csi-hostpathplugin-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:12.231822       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8714-1327/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:12.408904       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1876/ss2-1\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:12.469865       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-5189/ss2-1\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:12.582194       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8714-1327/csi-hostpath-resizer-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:12.745436       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-250/pod-secrets-025ef31c-2128-4e04-aa8e-5e54a28eae5b\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:12.912014       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8714-1327/csi-hostpath-snapshotter-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:14.532562       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-5281/pod-handle-http-request\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:14.801018       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-8937/pod-projected-configmaps-ebd2c9b7-21e9-4eac-9e2d-a8a89a8d2a4b\" 
node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:14.939618       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9649/netserver-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:15.100747       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9649/netserver-1\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:15.259023       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9649/netserver-2\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:15.417979       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9649/netserver-3\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:16.477729       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-4254/pod-6a2e22d7-dbf1-45fb-a23f-99f14868a915\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:16.493043       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4564/pvc-volume-tester-dgmf7\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:17.629023       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6396/hostexec-ip-172-20-48-92.ap-northeast-2.compute.internal-pmqzw\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:17.712079       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7932/pod-subpath-test-preprovisionedpv-bzkg\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:17.799604       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-5189/ss2-2\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:18.034153       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7530/exec-volume-test-preprovisionedpv-cs67\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:18.375930       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8714/pod-subpath-test-dynamicpv-sd9b\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:18.700327       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-7633/liveness-ca7cbba4-0c77-42fb-ba45-6b0fb1e4d81b\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:19.178570       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1876/ss2-2\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:20.386025       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-7106/downwardapi-volume-9d6e913e-1d8e-4c73-8c44-3694e18348ad\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:21.177052       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-5281/pod-with-poststart-exec-hook\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:21.391888       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-2975/test-cleanup-controller-4zbvh\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:24.964662       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-5574/httpd\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:26.566288       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-5189/ss2-0\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:27.354948       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7115/hostexec-ip-172-20-63-92.ap-northeast-2.compute.internal-glw7x\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:30.273785       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-8494/liveness-665be865-25ce-4237-9f78-84f650a21e91\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:30.365810       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-2975/test-cleanup-deployment-5b4d99b59b-ldxnx\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:31.165929       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6396/pod-f0bd77d7-0b1d-418d-af6e-67c646f5ff9e\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:31.922662       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6279/update-demo-nautilus-5l57d\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:31.929285       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-6279/update-demo-nautilus-5fdhb\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:33.803012       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8898/hostexec-ip-172-20-63-92.ap-northeast-2.compute.internal-wrbc7\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:36.238515       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-721/external-provisioner-9bvqz\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:36.760123       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-4948/test-container-pod\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:36.921525       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-4948/host-test-container-pod\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:37.562019       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5764/external-provisioner-cgmlr\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:37.693977       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-697/externalname-service-nxt8q\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:37.716220       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"services-697/externalname-service-qzcnj\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:39.188618       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-9649/test-container-pod\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:41.046193       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-697/execpodm6lq2\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:41.108245       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-8172/termination-message-container418a8d14-f34d-43b6-8797-0877c4d4b0c7\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:42.331458       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-1998/security-context-6cb5098b-add7-4ebe-9c31-eb89124b98c7\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:42.702052       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9561-5480/csi-mockplugin-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:42.895732       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1876/ss2-2\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:43.022708       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9561-5480/csi-mockplugin-attacher-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:44.476165       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-5189/ss2-1\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:46.647031       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9154/hostexec-ip-172-20-48-92.ap-northeast-2.compute.internal-wh8hb\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:46.852771       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7115/pod-subpath-test-preprovisionedpv-sh6c\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:48.204341       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8898/pod-subpath-test-preprovisionedpv-4fjx\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:49.011519       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1936/pvc-volume-tester-reader-j28pg\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:49.155379       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-2710/e2e-test-httpd-pod\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:49.635551       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3793-8037/csi-mockplugin-0\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:49.686978       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-721/pvc-volume-tester-writer-7ft2f\" err=\"0/5 nodes are available: 5 
pod has unbound immediate PersistentVolumeClaims.\"\nI0522 07:14:49.782034       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-6895/exec-volume-test-dynamicpv-jpr2\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:50.346927       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-5081/pod-projected-configmaps-10df3442-b648-462a-8032-3af9ae084c45\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:51.028707       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4393-1952/csi-mockplugin-0\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:51.358461       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-672-4872/csi-hostpath-attacher-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:51.495278       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"provisioning-721/pvc-volume-tester-writer-7ft2f\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0522 07:14:51.850695       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-672-4872/csi-hostpathplugin-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:52.159439       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-672-4872/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:52.489060       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-672-4872/csi-hostpath-resizer-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:52.563127       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1876/ss2-0\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:52.809148       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-672-4872/csi-hostpath-snapshotter-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:52.926031       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-8502/pod-sharedvolume-9d9919f4-5ef8-4818-b6d3-57bcc8672416\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:53.128242       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-5071/httpd\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:53.501311       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-721/pvc-volume-tester-writer-7ft2f\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:55.351418       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-5764/exec-volume-test-dynamicpv-xvsw\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:55.492362       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-5189/ss2-2\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:56.055858       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"kubectl-6279/update-demo-nautilus-z2hdc\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:56.556756       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-9757/dns-test-b9736d8a-161a-452e-9e43-38675f1233d9\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:57.306889       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9006/external-provisioner-zgn26\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:57.854270       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-5440/pod-secrets-481d3c1b-8fb3-4fe3-8483-316d093fa085\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:58.222431       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-672/hostpath-injector\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:14:59.015360       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-9040/downwardapi-volume-a6237617-e0de-4d51-9f46-7cb69a905173\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:14:59.937067       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9006/nfs-server\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:00.408261       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9561/pvc-volume-tester-8p66t\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:01.815911       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-721/pvc-volume-tester-reader-nt9qs\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:01.879892       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-8727/pod-ephm-test-projected-wr8l\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:01.902211       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-1025/pod-e53fc0bf-acdb-457c-9054-afede88aeace\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:02.463274       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9458/hostexec-ip-172-20-48-92.ap-northeast-2.compute.internal-bbzg9\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:02.562826       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4393/pvc-volume-tester-8tqxp\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:02.761503       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9154/pod-subpath-test-preprovisionedpv-wfr4\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:03.371858       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-3793/pvc-volume-tester-j879d\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:04.248926       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-8916/frontend-685fc574d5-sxq6h\" 
node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:04.292434       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-8916/frontend-685fc574d5-zfl5j\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:04.293230       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-8916/frontend-685fc574d5-vsfw6\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:04.897235       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-9561/inline-volume-mh2sn\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:05.095863       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-8916/agnhost-primary-5db8ddd565-b8vf4\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:05.704726       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3271/hostexec-ip-172-20-48-92.ap-northeast-2.compute.internal-hmntx\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:05.980934       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-8916/agnhost-replica-6bcf79b489-txdxl\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:05.986210       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-8916/agnhost-replica-6bcf79b489-p8n9m\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:08.467567       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-8511/nfs-server\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:11.578616       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1876/ss2-2\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:12.847963       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-8258/downwardapi-volume-857fd22d-880b-4091-8c50-d3de533a35e7\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:14.010669       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9458/pod-9794b9f2-cbfc-4c44-80a8-caf5375dff7e\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:15.029520       1 factory.go:338] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-5879/inline-volume-ttbxg\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-ttbxg-my-volume\\\" not found.\"\nI0522 07:15:15.117571       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-2833/startup-9da2145f-6eb7-4879-912f-62e5e55175e4\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:17.983050       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7961/hairpin\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:18.128549       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-wrapper-3933/pod-secrets-4a9f9ba0-5049-4e4d-beb6-bb7a5cff90e2\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0522 07:15:18.578962       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3271/pod-subpath-test-preprovisionedpv-z86z\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:18.579204       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9006/pod-2264e4b9-0fa7-4ac7-9a3c-18f1054d979f\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:19.425522       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8940/up-down-1-tltps\" node=\"ip-172-20-35-65.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:19.452393       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8940/up-down-1-pwszm\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:19.456097       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-8940/up-down-1-7lsn8\" node=\"ip-172-20-63-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0522 07:15:20.318083       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-5879-6319/csi-hostpath-attacher-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:20.887728       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-5879-6319/csi-hostpathplugin-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:21.212039       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-5879-6319/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:21.438283       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-9006/hostexec-ip-172-20-49-129.ap-northeast-2.compute.internal-f4xkc\" node=\"ip-172-20-49-129.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:21.544743       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-5879-6319/csi-hostpath-resizer-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:21.875574       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-5879-6319/csi-hostpath-snapshotter-0\" node=\"ip-172-20-48-92.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0522 07:15:22.003253       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-4191/test-dns-nameservers\&
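
Two message shapes recur in the scheduler output above: "Successfully bound pod to node" (scheduler.go:604), whose evaluatedNodes/feasibleNodes counters show how many of the cluster's 5 nodes were considered and how many passed filtering, and "Unable to schedule pod; no fit; waiting" (factory.go:338), which carries the reason in err=. Entries with feasibleNodes=1 correspond to pods that filtering narrowed to one specific node (for example the hostexec-* helpers and the csi-hostpath-* per-node components); the no-fit entries are retried until the blocking condition clears, as the eventual binding of provisioning-721/pvc-volume-tester-writer-7ft2f at 07:14:53.501311 shows. As a minimal sketch for summarizing a saved copy of this log (assuming Python 3 and a hypothetical local file name kube-scheduler.log; neither is part of the job itself), the script below tallies bindings per node and collects the no-fit events:

    import re
    from collections import Counter

    # Matches the structured "Successfully bound pod to node" entries above.
    BOUND = re.compile(
        r'"Successfully bound pod to node" pod="([^"]+)" node="([^"]+)"'
        r' evaluatedNodes=(\d+) feasibleNodes=(\d+)'
    )
    # Matches the "no fit" entries; err= can contain backslash-escaped quotes
    # (see the inline-volume-ttbxg-my-volume line), so match greedily to the
    # last quote on the line.
    NO_FIT = re.compile(
        r'"Unable to schedule pod; no fit; waiting" pod="([^"]+)" err="(.*)"'
    )

    bindings_per_node = Counter()
    pinned = 0          # bindings with feasibleNodes=1 (node-constrained pods)
    no_fit_events = []  # (pod, err) tuples

    with open("kube-scheduler.log") as fh:  # assumed local copy of the dump
        for line in fh:
            m = BOUND.search(line)
            if m:
                _pod, node, _evaluated, feasible = m.groups()
                bindings_per_node[node] += 1
                if feasible == "1":
                    pinned += 1
                continue
            m = NO_FIT.search(line)
            if m:
                no_fit_events.append(m.groups())

    for node, count in bindings_per_node.most_common():
        print(f"{count:4d} pods bound to {node}")
    print(f"{pinned} bindings had feasibleNodes=1")
    for pod, err in no_fit_events:
        print(f"no fit: {pod}: {err}")

Run against the excerpt above, this would report the three no-fit events visible here: two retries for provisioning-721/pvc-volume-tester-writer-7ft2f (unbound immediate PersistentVolumeClaims), which clear once the claim binds, and one for ephemeral-5879/inline-volume-ttbxg, whose generated claim did not exist yet at scheduling time.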